There’s a lot of AI misinfo going round social media… how can people tell?

  • First, I’ll give you the answer to your question. Then I’ll add some commentary.


    To identify AI-generated misinformation on social media, users should employ a combination of critical thinking, technical tools, and awareness of common tactics:

    1. Analyze visual anomalies in images
    Look for AI-generated image flaws like distorted hands (extra fingers, fused digits), unnatural lighting, or illogical object arrangements. Pages posting large volumes of stylistically similar content (e.g., “wood carving phases” followed by “sushi art phases”) may indicate AI content farms.

    2. Verify sources and cross-check claims
    Search key phrases or use reverse image search tools (Google Lens) to check image origins. Reputable news organizations rarely break stories exclusively through random social media posts – absence of coverage elsewhere is a red flag.

    3. Assess emotional manipulation tactics
    Content provoking strong outrage, fear, or excitement deserves extra scrutiny. AI-generated political deepfakes often use inflammatory language alongside synthetic media.

    4. Investigate account behavior
    Suspicious accounts may exhibit:

    • Excessive emoji/hashtag use
    • Repetitive phrasing patterns
    • Rapid follower growth without clear cause
    • Coordinated posting across multiple pages

    5. Leverage media literacy resources
    Use tools like the News Literacy Project for verification techniques. Platforms like NewsGuard track AI-enabled fake news sites, which increased 10x in 2023.

    6. Assume synthetic content is possible
    Even professional-looking videos could be AI-generated (e.g., OpenAI’s Sora). For text, AI detectors are unreliable – focus on content plausibility and sourcing instead.

    7. Report suspicious content
    Flag posts to platform moderators and share debunking resources in comments when safe. Collaborative efforts between tech companies, fact-checkers, and governments are improving detection systems.

    Critical first steps:

    • Pause before sharing emotionally charged content
    • Verify through independent searches
    • Consider motives – does this content benefit scammers/spammers?

    Now for the commentary.

    The above was generated by a degenerative AI. (Perplexity, to be specific.) Perhaps an expert in the field could tell that it’s LLM output, but I sure as Hell couldn’t. And that’s where the problem lies. It was once trivial to spot content generated by LLMs because the hallucinations were egregious. I didn’t fact-check everything in this case (like, do NewsGuard, Sora, and the News Literacy Project even exist?), but if someone posted that to me I would not bother fact-checking because it’s very plausible on the face of it.

    For images there’s one tell that the bot didn’t provide. Because of the way image generation works, there’s an unnatural evenness of light and dark spaces in AI imagery. Once you’ve trained yourself a bit on known AI images compared with equivalent non-AI images, you can’t unsee it. But it’s subtle and you have to explicitly look for it. It’s not something that’s going to jump out at you. The metadata checks others have mentioned here, and the motive checks, are probably more reliable signals.
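
    If you want to poke at that “evenness” idea yourself, here is a minimal sketch of one way to put a rough number on it. This is my own illustrative take, not anything from the quoted answer: it assumes Pillow and NumPy are installed, the 32-pixel tile size and the metric (standard deviation of per-tile mean luminance) are arbitrary choices, and the file names are placeholders.

```python
# Rough sketch: compare how evenly light and dark regions are distributed
# across two images. Lower scores suggest more uniform tonal spread.
import numpy as np
from PIL import Image

def tonal_evenness(path: str, tile: int = 32) -> float:
    """Std. dev. of per-tile mean luminance; lower = more even tones."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    h, w = img.shape
    # Crop to a whole number of tiles so the reshape below is exact.
    img = img[: h - h % tile, : w - w % tile]
    blocks = img.reshape(img.shape[0] // tile, tile,
                         img.shape[1] // tile, tile)
    block_means = blocks.mean(axis=(1, 3))
    return float(block_means.std())

if __name__ == "__main__":
    # Placeholder file names -- substitute images you want to compare.
    for name in ("suspected_ai.jpg", "known_photo.jpg"):
        print(name, round(tonal_evenness(name), 2))
```

    Treat the numbers as a nudge to look closer, not a verdict; compression, subject matter, and editing all move this metric around.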