

How to defend against phishing emails, deepfakes, and many other types of social engineering done with generative AI.

What you will learn

How to prevent phishing/social engineering attempts performed with generative AI

How to identify generative/false/synthetic content

How to develop new defense mechanisms against these new generative threats

How to integrate new defense mechanisms into your current organization

Why take this course?

DON’T GET ENGINEERED

Social engineering, such as phishing, is one of the biggest security problems for corporations – and for individual users.

And the advent of generative AI… has just made it worse.

In today’s world, companies and individuals must be able to resist not only social engineering and phishing, but also social engineering that leverages generative AI – which means faster, larger-scale, more sophisticated attacks.


This course will teach you how to protect against social engineering when it is accelerated by generative AI.

LET ME TELL YOU… EVERYTHING.

Some people – including me – love to know what they’re getting in a package.

And by this, I mean, EVERYTHING that is in the package.

So, here is a list of everything that this course covers:

  • You’ll learn the basics of generative AI and what it can do, including common models and families of models, the characteristics of generative content, and how it can be misused due to negligence or active malevolence (including biases, misinformation, impersonation and more);
  • You’ll learn the basics of social engineering and what it consists of (manipulating someone into giving you access to information you otherwise would not have), including common approaches and the factors that enable it (social norms, weak OPSEC, etc);
  • You’ll learn the basics of social engineering with generative AI, including how it accelerates approaches (more sophisticated attacks, faster attacks, on larger scales, with micro-targeting), the major approaches that are affected (such as impersonation or more convincing pretexts), and the major defenses that are also affected (sophisticated detection mechanisms, MFA, behavioral analytics, faster IR, etc);
  • You’ll get an overview of the major generative content types used in social engineering attacks (text, image, audio and video), including the specific approaches each type enables, the data and training requirements attackers face to build such models, and how each type can be detected;
  • You’ll learn about generative text, including the models that enable it (e.g., LLMs), the usual distribution channels (messages, emails, social media profiles), the data required to train such models (text samples, including samples from specific targets), and how it can be detected (inconsistencies in facts, spotting characteristic text styles and patterns, detecting emotional manipulation patterns – the first sketch after this list illustrates the idea);
  • You’ll learn about generative images, including the models that enable them (e.g., GANs, diffusion models, VAEs), the usual distribution channels (social media or specific platforms, such as for false documents), the data required to train such models (a variety of images, possibly of specific people or documents), and how they can be detected (artifacts, elements that meld into each other, reverse image searches, etc – the second sketch after this list illustrates a simple check);
  • You’ll learn about generative audio, including the models that enable it (e.g., TTS models, GANs), the usual distribution channels (VoIP or cellular calls, messaging apps, social media posts), the data required to train such models (audio samples, possibly of a specific individual), and how it can be detected (mismatches in speech patterns, accent, tone, or with automated detectors);
  • You’ll learn about generative video, including the models that enable it (e.g. GANs, deep learning video models, motion transfer models), the usual distribution channels (video platforms such as YouTube/Vimeo, social media such as FB/IG/TikTok, or publications/news outlets), the required data to train such a model (a variety of footage, including possibly of a specific person or situation), and how it can be detected (mismatches in gestures, facial expressions, lack of synchronization in lip movement, etc);
  • You’ll learn about the advanced impersonation approach, where fraudsters impersonate someone, such as via text, or with an audio/video deepfake, as well as how it’s executed, the specific types of consequences it has and how to defend against it;
  • You’ll learn about the hyper-personalization approach, where fraudsters create messages or bait that is targeted at a person’s specific tastes or preferences, as well as how it’s executed, the specific types of consequences it has and how to defend against it;
  • You’ll learn about the emotional manipulation approach, where fraudsters create content made to polarize someone in terms of emotions (positive or negative), to get them to make a rash decision without using logic, as well as how it’s executed, the specific types of consequences it has and how to defend against it;
  • You’ll learn about advanced pretexting, where the fraudster uses an excuse/pretext to obtain information – but a very realistic one created with generative AI – as well as how it’s executed, the specific types of consequences it has and how to defend against it;
  • You’ll learn about automated/scalable attacks, where fraudsters simply overwhelm defenses by launching attacks en masse, causing disruption and straining resources, as well as how it’s executed, the specific types of consequences it has and how to defend against it;
  • You’ll learn about defending your organization with awareness and training, but specifically educating employees on the specific social engineering approaches that leverage generative AI, as well as including these in training programs, and motivating employees to be skeptical and report suspicious situations without pushback;
  • You’ll learn about defending your organization with text corroboration, verifying the facts and context in the communications you receive, either through manual search or automated fact retrieval, as well as pointers for spotting suspicious inconsistencies in generative text (in the conclusions, in the facts, in a possible lack of congruence with similar communications, etc);
  • You’ll learn about defending your organization with mannerism analysis, analyzing nuances and incongruencies in someone’s speech patterns, facial expressions, and/or body language gestures and posture, identifying telltale signs of AI-generated audio and video;
  • You’ll learn about defending your organization with identity verification measures – a practice that is standard, but that is no longer enough, as-is, in a world where fraudsters can imitate someone’s likeness in a realistic manner;
  • You’ll learn about defending your organization with technological defenses that can automate some of the flagging and removal of generative content, including content analysis tools, automated deepfake detectors, and/or behavioral analysis tools that can detect anomalies in behavior (the third sketch after this list illustrates the idea);
  • You’ll learn about defending your organization with policies and culture, defining specific types of generative threats and controls for each, defining strict processes with no exceptions, and promoting a culture of reporting suspicious actions (even with high-status clients and executives!);
  • You’ll learn what changes in an organization’s defense strategy because of generative AI threats – which defense mechanisms stay the same in this “new world”, and which additional mechanisms become necessary to counter the new generative threats;
  • You’ll get an overview of the detection and triage of generative threats, including calculating risk levels for these threats, prioritizing them and dealing with them, as well as the general process of detecting such threats and integrating their handling into the organization;
  • You’ll get an overview of responding to, and recovering from, social engineering attacks that use generative AI, including containing or mitigating the threat, conducting in-depth investigations, recovering from the incident, and updating defense mechanisms based on the lessons learned.
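
To make the text-detection points above a little more concrete, here is a minimal, hypothetical Python sketch of a keyword-based pre-filter that flags urgency and emotional-manipulation cues in a message. The pattern lists, the scoring, and the escalation threshold are all assumptions for illustration, not material from the course; a real deployment would combine this kind of filter with fact corroboration and dedicated detectors.

```python
import re

# Hypothetical pattern lists and threshold -- illustrative only, not taken from the course.
URGENCY_PATTERNS = [
    r"\burgent(ly)?\b",
    r"\bimmediately\b",
    r"\bwithin (the next )?\d+ (minutes|hours)\b",
    r"\bdo not (tell|inform)\b",
    r"\bconfidential\b",
]
EMOTION_PATTERNS = [
    r"\byou (will|could) lose\b",
    r"\blast chance\b",
    r"\bact now\b",
    r"\bonce-in-a-lifetime\b",
]

def red_flag_score(message: str) -> int:
    """Count how many urgency / emotional-manipulation patterns appear in a message."""
    text = message.lower()
    return sum(1 for pattern in URGENCY_PATTERNS + EMOTION_PATTERNS if re.search(pattern, text))

if __name__ == "__main__":
    email = ("This is urgent: wire the payment immediately and do not tell "
             "anyone until the deal closes. This is your last chance.")
    score = red_flag_score(email)
    # Arbitrary, assumed threshold: two or more hits routes the message to manual review.
    verdict = "escalate for out-of-band verification" if score >= 2 else "normal handling"
    print(f"red flags: {score} -> {verdict}")
```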
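
For the generative-image bullet, one rough first check (before or alongside a reverse image search) is comparing a suspicious photo against known legitimate images with a perceptual hash. The sketch below assumes the Pillow and ImageHash Python libraries, made-up file names, and an arbitrary distance threshold; it only flags reuse or near-copies of a known photo, not deepfakes in general.

```python
from PIL import Image      # pip install pillow
import imagehash           # pip install ImageHash

# Assumed, hypothetical file names -- replace with your own known-good and suspect images.
KNOWN_GOOD = "ceo_official_headshot.jpg"
SUSPECT = "profile_photo_from_new_contact.jpg"

def hamming_distance(path_a: str, path_b: str) -> int:
    """Perceptual-hash distance between two images (0 = visually identical)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b  # imagehash overloads '-' to return the Hamming distance

if __name__ == "__main__":
    distance = hamming_distance(KNOWN_GOOD, SUSPECT)
    # The threshold of 10 is an assumption; tune it on your own image set.
    if distance <= 10:
        print(f"distance {distance}: likely the same underlying photo")
    else:
        print(f"distance {distance}: visually different -- verify through another channel")
```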
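
And for the behavioral-analytics idea mentioned under technological defenses, here is a toy sketch of statistical anomaly detection. The baseline data, the hour-of-day feature, and the z-score threshold are assumed values for illustration; production behavioral analytics uses far richer signals.

```python
import statistics

# Hypothetical baseline: hours of day (0-23) at which a user typically approves wire transfers.
baseline_hours = [9, 10, 10, 11, 9, 14, 15, 10, 11, 9, 10, 16]

def is_anomalous(hour: int, history: list[int], z_threshold: float = 2.5) -> bool:
    """Flag an event whose hour-of-day deviates strongly from the user's history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero on flat histories
    z = abs(hour - mean) / stdev
    return z > z_threshold

if __name__ == "__main__":
    # A payment approval arriving at 3 a.m. is far outside this user's normal pattern.
    print(is_anomalous(3, baseline_hours))  # True -> hold the request and verify out of band
```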
Language: English