With the rapid devlopment and acceptance of generative ai, agents and chat bots, the urgency to validate and regulate every input or output and reaction of AI is always on the rise. Major organisations like GSMA took interest. I recently had the encounters of participating from demonstrations to workshops and even real-world experience of artifical intelligence. Even had the opportunity to witness large language models from companies such as Amazon, IBM, Salesforce, Microsoft, Fujitsu and other major Faang companies and start-ups too.

Truth of the matter is that everyone is indulging their hands, eyes and other human sensories in the world of AI.

Screenshot

That being said, I’m no stranger with security and AI, Thanks to the asian based company Datumo which is an AI startup focused on addressing the need for trustworthy AI, partering with global tech giants like Samsung & LG and major telecom service providers. I discovered challenges relating to the deployment of AI & LLM such as hallucinations, misinterpretations or even security risks.

Tackling the Datumo workshop event where prizes were being held and awarded for red-teaming AI products and LLM models. Specifically soughting after malfunctioning prompts and misleading the AI LLM to bend and complete unauthorised tasks unknowingly.

Every participant (roughly 80 -100 contestants) was given topics and scenario challeneges prompting several telecom-specific LLMS for prizes Topics would include biases, hallucinations & other vulnerabilities in models through chat interation. Then the AI-Cyber security specialized judges will then evalaute each team prompts against a certains et of criteria and annouce the winners and positions.

Participating and completing this workshop allowed me to dive into AI-reliability and security keeping me ahead of the problems that top tech companies are actively trying to solve. Whilst allowing me to blend two in-demand fields AI & cyber security.

Moving forward with implementing AI into everything we do now demands caution and critical security engineering thinking and implementation designs. Which will force new strategic questions to the point of can 0% hallucination really be in achievable graps for every AI project? Investment into safety, legalityand security may rise due to this pre-caution? etc. I personally would love to find answers to the dozen of questions that emerged in your head from reading this so far, So stay up date with the next couple of articles regarding AI and expect more essential Insights!