Cybersecurity and AI: Challenges and Opportunities

By Alana Walker
October 6, 2023
Estimated reading time: 5 minutes

The pace of AI advancement is doubling every six to ten months, creating new cybersecurity challenges along the way. From ChatGPT and Bard to facial recognition, cybersecurity professionals have their hands full, and there aren't enough hands. With a seemingly negative outlook on AI, cybersecurity, Canada's job shortage, and the future in general, it can be easy to view new technology with a pessimistic eye. While it's true that businesses should exercise caution with these new developments, it isn't all bad news. We dig into the challenges and opportunities AI presents for cybersecurity.

ChatGPT and generative learning models can be used for the worst

According to Forbes, generative AI tools like ChatGPT, Bard, and other language models could affect how information is presented and made available by search engines. How? By automating the crafting of convincing but misleading text for use in influence operations. And that's just the beginning.

99 problems with ChatGPT

Look, if you've read our other articles, you've heard us rag on ChatGPT. It goes without saying that AI can be helpful, from meal planning to spitting out ten different subject lines for that email you're stuck writing. However, the architecture of this and other AI models poses significant security concerns. While OpenAI maintains that safety has been built into its AI tools and that its founders will do everything in their power to prevent misuse of their product, the tech industry and cybersecurity experts are worried.

ChatGPT is unwittingly handing more power to those with little coding knowledge and a major desire to cause harm. Low-level hackers can use it to develop basic code accurate enough to pull off a minor attack. Earlier this year, BlackBerry released a survey showing that 74% of IT decision-makers surveyed are concerned about ChatGPT's potential threat to cybersecurity. Of those, 51% believe a successful cyberattack will be credited to ChatGPT within the year. While opinions differ on exactly what this will look like, many of those surveyed see ChatGPT's ability to write believable phishing emails and spread misinformation as the number one global concern.

On top of that, the Canadian Centre for Cyber Security released a publication highlighting some of the risks of generative AI like ChatGPT:

- Creating realistic content, making it harder to identify phishing emails or scams. Threat actors can also write targeted spear-phishing attacks with a higher level of sophistication.
- Users may unintentionally share private information that malicious actors can then use.
- Cybercriminals can bypass generative AI restrictions to create malware.

Other risks include:

- Creating malware for use in targeted cyberattacks.
- Using disinformation in fraudulent campaigns against individuals and organizations.
- Deliberately or accidentally introducing buggy code into software development.
- Injecting malicious code into datasets, which could increase the chance of large-scale supply-chain attacks (one basic mitigation is sketched after this list).
- Stealing corporate data faster and in larger quantities.
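One simple defence against the dataset and supply-chain risks above is to verify any downloaded artifact against a known-good checksum before using it. Here is a minimal Python sketch of that idea; the file name and expected hash are placeholders for illustration (the value shown is the SHA-256 of an empty file), not a reference to any real release.

```python
import hashlib
import sys

def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder value: in practice, the expected hash must come from a
# trusted channel, e.g. the publisher's signed release notes.
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    path = sys.argv[1] if len(sys.argv) > 1 else "training_data.csv"
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        sys.exit(f"Checksum mismatch for {path}: refusing to use this file.")
    print(f"{path} verified OK.")
```

The key design point is that the expected hash travels over a different, trusted channel than the file itself, so an attacker who tampers with the dataset can't simply swap the checksum too.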
Facial recognition, AI, and navigating murky waters

No matter how you feel about it, facial recognition technology (FRT) is becoming increasingly integrated into our everyday lives, from the mundane task of unlocking your iPhone to its sometimes dubious use in police surveillance. The global facial recognition market is forecast to reach US$12.67 billion by 2028.

While some uses of FRT may seem relatively private, like Google categorizing your photos or a company storing security camera footage, much of the facial data captured today isn't just stored locally; it can end up shared publicly. Not only that, but malicious actors can easily scrape facial information from databases and do whatever they want with it. For example, China is leveraging FRT to judge citizens' behaviour and adjust each person's social credit score.

Some of the main concerns with the fast development of FRT are:

- Lack of consent. Using FRT without an individual's consent is a huge privacy concern.
- Unencrypted faces. Faces are becoming easier to capture at longer distances, and facial data is often stored unencrypted; unlike a password, a face can't be changed once compromised.
- Lack of transparency. Individuals often have no way of knowing when their image is captured, how it is processed, or who it is shared with.
- Technical vulnerabilities. Masks (aka spoofs) can be created from digital imagery alone, and captured faces hand easy raw material to those looking to use deepfake technology.
- Inaccuracy. FRT can misidentify someone, leading to wrongful arrests and other negative outcomes. Racial minorities are more likely to be misidentified, adding more strain to already vulnerable groups.

Fortunately, some governing powers are trying to curb FRT's encroaching reach by adding accountability and privacy requirements to the technology. Pittsburgh and the state of Virginia require prior legislative approval to deploy FRT, while Massachusetts and Utah require law enforcement to submit a written request before conducting a facial recognition search.

It all comes down to consent, privacy, and transparency. Companies should be frank when enrolling customers in FRT for verification purposes, and enterprises should give consumers detailed notice about how FRT templates were developed and what data will be used, shared, or destroyed. Above all, the implementation of FRT needs to come with hefty cybersecurity safeguards. For this, we need more experts.

Cybersecurity talent gap creates vulnerabilities

Cybercrime in Canada is reaching crisis levels; the government estimates it causes more than $3 billion in damage each year. Any company is vulnerable, and businesses big and small are scrambling to find qualified professionals to strengthen their digital defences. With demand for cybersecurity professionals doubling yearly, the immediate need requires a better approach to training cyber talent.

"Education and skilling-focused programs and tools like Explore and Career Ready are helping to equip and empower the next generation of digital leaders with the essential skills our economy needs to thrive in the future," says Kevin Magee, Chief Security Officer of Microsoft Canada.

Programs like Lighthouse Labs' Cybersecurity Program are equipping individuals for the workforce in a matter of months, helping to close the cybersecurity talent gap. We think Kevin would approve.

AI: a partner, not an adversary, for cybersecurity

Another way to fill in the missing cybersecurity puzzle pieces is to leverage the artificial intelligence already available. While the industry awaits more qualified professionals, computers can take on the one thing most humans hate: performing the same menial tasks repeatedly and consistently. This frees humans to handle the more complicated pattern matching that demands nuance and curiosity (a simplified example of such automatable triage is sketched below). Creating jobs around this division of labour will help close the talent gap.
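As a concrete, if simplified, illustration of the kind of repetitive work machines handle well, here is a short hypothetical Python sketch that scans an SSH-style auth log and flags IP addresses with many failed logins. The log format, file name, and threshold are assumptions for illustration, not a production detection rule.

```python
import re
from collections import Counter

# Matches lines in the style of a Linux auth.log, e.g.:
#   "Failed password for root from 192.168.1.10 port 22 ssh2"
# Real log formats vary; adjust the pattern to your environment.
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5  # assumed cutoff for flagging an address

def flag_suspicious_ips(log_path: str) -> list[tuple[str, int]]:
    """Count failed logins per source IP and return those at or over THRESHOLD."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return [(ip, n) for ip, n in counts.most_common() if n >= THRESHOLD]

if __name__ == "__main__":
    # Hypothetical log file name, for illustration only.
    for ip, n in flag_suspicious_ips("auth.log"):
        print(f"{ip}: {n} failed logins -- review for brute-force activity")
```

A human analyst then reviews a short list of flagged addresses instead of reading every log line, which is exactly the division of labour described above; AI-driven tools extend the same idea to patterns too subtle for a fixed rule.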
Handing automation tasks over to AI could create new positions for humans focused on oversight and decision-making. While AI may seem like a negative for job seekers, cybersecurity has always been more than just data crunching; the field has always required a wide range of skills, only some of which can now be delegated to machines. Skills that remain firmly in the human sphere include:

- Expert problem-solving. Cybersecurity is constantly changing, so creative thinking is critical.
- Communication. You'll work alongside technical and non-technical teams, so you'll need to translate complex, jargon-filled ideas into plain language.
- Research. The best of the best stay on top of developments and trends, including changes in AI.
- Business know-how. Depending on your role, you might need to frame recommendations in terms of business needs.
- Technical knowledge and attention to detail. You must be able to install and maintain computer systems, and that hands-on technical understanding is something AI can't replace.

All in all, cybersecurity hopefuls and current employees shouldn't worry about AI stealing their jobs; rather, their focus should be on stopping malicious actors from using AI for their own gain. Beyond that, learning which tasks AI can automate frees up time for more complex technical and problem-solving work.

At Lighthouse Labs, we exist to make tech-enabled change an opportunity for all. We believe in the power of artificial intelligence (AI) to help people and make our world a better place. We recognize that the responsible and ethical development and use of AI is paramount to its success in driving progress.