My first podcast for the rundown. Let me know what you think!
This will be the first of a series on AI and how it will impact the media and communications space. My guest, Candyce, is one of the world's foremost security experts. The ease of creating fake content with artificial intelligence will create chaos during the presidential elections in the US and around the world. I'll be thinking through how newsrooms and journalists will need to reinvent their workflows with new technologies, security tools, and skills. This will be a real-time experiment in the fight against malicious content.
Realistic, but not real.
Look out for more AI-generated news anchors, reporters, stories, videos, and full news packages. Here's an entire TV show generated by AI.
Here's the image of the Pope in Balenciaga that went viral. Fake. AI-generated social media influencers like Aitana Lopez, who has almost 250,000 followers on Instagram, are already here. AI Obama, AI Biden, and AI Trump are in the mix, with audio of comments they never actually made.
All of it is dangerous. We can easily be fooled. Fake audio. Fake video. Rumor "bombs". Disinformation. It's also easy, accessible, and cheap to create this type of audio: it costs me $22 a month to tinker with voices and clone them. Lots of tools are being developed to combat all this. How does a newsroom handle every sort of fakery, poisoned data, and synthetic media across text, video, and audio while retaining its credibility?
We need to start thinking of newsrooms as intelligence hubs
A new layer of newsrooms needs to be considered. An intelligence hub. Leaders need new skills for their teams. Journalists and information experts will need to manage:
Apply structured analytic techniques
Run key-assumption checks
Develop their own in-house models
Verify the quality of incoming information
All that information has to be fact-checked properly before dissemination.
A Chain of Custody Unit. How is information gathered, stored, and transported? Could blockchain be the key to authenticating content like video and photos and countering disinformation? I wrote about a company trying to do this in a blog last year. The idea here is essentially a way to prove the integrity of an image from the moment the camera captures a shot. News organizations are in the process of exploring blockchain for chain-of-custody issues. Here are a couple of examples. I know it’s hard to wrap our heads around blockchain. Here is something I wrote to help us learn about blockchain. Scroll a bit.
This is a chain of blocks that hold information, each with a digital timestamp. Think of it like a notary. No single person or entity manages it; it's a peer-to-peer network where anyone is allowed to join. It's an open ledger available to everyone, so I can look into any transaction on the blockchain and see what wallet activity is going on. Every single transaction is forever recorded in a block. Each block contains data, its own hash, and the hash of the block before it. Think of a hash as a fingerprint: it's always unique. If anything in a block changes (say, a new transaction), its hash changes too, which breaks the link to every block that follows. That chaining of hashes is where a blockchain's security comes from.
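To make the fingerprint idea concrete, here is a minimal sketch of a hash chain in Python. It is an illustration of the concept described above, not the implementation any of the companies mentioned actually use: each "block" stores a timestamp, the SHA-256 hash of some content (say, a photo's bytes at the moment of capture), and the hash of the previous block. Tampering with any block breaks the chain.

```python
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    """Return the SHA-256 hex digest: the 'fingerprint' of the data."""
    return hashlib.sha256(data).hexdigest()

def make_block(content: bytes, prev_hash: str) -> dict:
    """Bundle content with a timestamp and a link to the previous block."""
    block = {
        "timestamp": time.time(),
        "content_hash": sha256(content),
        "prev_hash": prev_hash,
    }
    # The block's own fingerprint covers everything above.
    block["block_hash"] = sha256(json.dumps(block, sort_keys=True).encode())
    return block

def chain_is_valid(chain: list) -> bool:
    """Recompute every fingerprint; any tampering breaks a link."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "block_hash"}
        if sha256(json.dumps(body, sort_keys=True).encode()) != block["block_hash"]:
            return False  # this block was altered after the fact
        if i > 0 and block["prev_hash"] != chain[i - 1]["block_hash"]:
            return False  # the link to the previous block is broken
    return True

# A photo is hashed at capture; each later edit or transfer appends
# a new block that references the last one.
genesis = make_block(b"raw-photo-bytes", prev_hash="0" * 64)
chain = [genesis, make_block(b"edited-photo-bytes", genesis["block_hash"])]
print(chain_is_valid(chain))          # chain intact
chain[0]["content_hash"] = "forged"   # tamper with the original record
print(chain_is_valid(chain))          # verification now fails
```

This is also the intuition behind the chain-of-custody idea: if the very first block is created the moment the camera fires, anyone downstream can verify that the image they received matches the original fingerprint.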
Source: Midjourney AI Sous-Chef
A Fusion Center. This unit would provide the first snapshot of the day and of all the content coming in, and would help distinguish misinformation from disinformation. Misinformation is false information spread without malicious intent; it often confirms our view of the world, and therefore we believe it's real. Disinformation is the deliberate and nefarious manufacturing of lies intended to damage. This is essentially proxy warfare. Information Grading. This role within the fusion center checks and double-checks information from various sources using technology and tools, then grades it.
An Advanced Analysis Unit. This unit would determine what information is important and what's not. This is a trained analyst who understands information technology, not someone with copyediting, writing, grammar, or legal skills. This talent must know how to assess information, and how to disseminate it after it has been put through checks and balances.
Here is another highlight from the podcast with Candyce:
Data hygiene is a must for the newsroom of the future.
From a security perspective, the rules still remain the same. When you go home, you are careful about who you let into your house. You are careful about who uses the internet when they're sitting in your home. You lock your door, you close your curtains in the evening. And if you're ordering stuff online, you use a trusted online provider; you don't just give any and everybody your card. The same rules apply when it comes to dealing with information, and indeed for media organisations. Extended data hygiene is important. Who you hire is important. Proper background checks. If you're using technology, and you're using third-party AI software or AI algorithms, then make sure you know who the developer is. Actually take the time to learn about the company, what other products they have made, and who works for them, because that gives you the track record you're looking for. All code, all code, is susceptible. If you're going to be using large language models, then make sure that you have a clear understanding of where it came from. You don't need to be able to write code; you just need to know the people who wrote the code. That's the main criterion.
Coming Soon:
I'll offer some thoughts on OpenAI vs. The New York Times and Midjourney vs. artists on IP and copyright infringement.
This is the first in our series on AI and the US elections (any election, for that matter). Newsroom leadership, storytelling, and security considerations remain the same. This topic will be a collaborative effort between The Rundown and the Canadian Association for Security and Intelligence Studies, Vancouver. We will be teaching an intro course on therundown.studio for our paid subscribers.
Share this post