The National Information Technology Development Agency (NITDA) has raised alarm over newly discovered vulnerabilities in ChatGPT that could put Nigerian users at risk of data-leakage attacks. The agency issued the advisory through its Computer Emergency Readiness and Response Team (CERRT.NG), urging businesses, government institutions, and individuals to take immediate precautions.
According to the notice, researchers identified seven vulnerabilities in GPT-4o and GPT-5 models. These flaws allow attackers to manipulate ChatGPT through hidden instructions embedded in websites, comments, or URLs, which can trigger unintended commands during routine browsing, summarization, or search activities. In some cases, attackers can bypass safety measures or even poison the AI’s memory, potentially influencing future interactions.
NITDA emphasized the potential consequences of these vulnerabilities. Users could experience unauthorized actions performed by ChatGPT, exposure of sensitive information, or manipulated outputs. The agency warned that some attacks may occur silently, without any direct interaction from the user, making awareness and caution critical.
To mitigate these risks, NITDA recommended that Nigerians limit or disable ChatGPT’s browsing and summarization of untrusted sites, update deployed GPT models regularly, and enable advanced features like memory or browsing only when necessary. These steps aim to reduce the risk of malicious instructions affecting AI behavior or compromising user data.
This alert follows previous NITDA warnings, including one earlier this year about critical security flaws in eSIM technology affecting billions of devices worldwide. Together, these advisories highlight the growing cybersecurity challenges posed by emerging technologies and reinforce the importance of vigilance, updates, and safe digital practices.
source: nairametrics
