1. One thing that helps
Donald Trump’s rise to the US presidency was the clearest indication yet that social media is the most powerful influence tool we have ever seen. Through savvy use of Twitter, combined with an explosion of disinformation campaigns run by groups with a stake in the result, he went from laughable candidate to holder of the most powerful office in the world.
The spread of disinformation just keeps growing. These campaigns take place online, but they have a huge real-world impact. So how do we guard against them? The key might be RIO, the Reconnaissance of Influence Operations program, an AI system developed by MIT Lincoln Laboratory.
RIO combines multiple techniques to detect disinformation and the accounts that spread it. It can detect both bot and human activity in social networks, and, importantly, it also measures how much those accounts cause the network as a whole to change. It also analyses factors such as interaction with foreign media and flags similarities with other accounts known to engage in influence operations.
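MIT has not published RIO’s code, but the core idea of measuring how much an account changes the network can be sketched in plain Python: compare how far a message spreads with and without that account. Everything below (the toy reshare graph and the `reach` and `impact` helpers) is a hypothetical illustration, not RIO’s actual method:

```python
# Hypothetical sketch (not MIT's actual RIO code): score an account by how
# much removing it shrinks a message's reach through a reshare network.
from collections import deque

def reach(graph, seeds):
    """All accounts reachable from the seed accounts via reshares (BFS)."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def impact(graph, account, seeds):
    """How many accounts the message fails to reach if `account` is removed."""
    pruned = {a: [b for b in nbrs if b != account]
              for a, nbrs in graph.items() if a != account}
    remaining = [s for s in seeds if s != account]
    return len(reach(graph, seeds)) - len(reach(pruned, remaining))

# Toy reshare network: account -> accounts that reshare its posts.
graph = {"bot1": ["u1", "u2"], "u1": ["u3"], "u2": [],
         "u3": [], "u4": ["u5"], "u5": []}
print(impact(graph, "bot1", ["bot1"]))  # prints 4: bot1 plus the 3 accounts it reaches
print(impact(graph, "u4", ["bot1"]))    # prints 0: u4 plays no part in the spread
```

An account with a high impact score shapes the conversation far beyond its own posts, which is exactly the behaviour an influence operation relies on.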
Why is this important? The technique can be used to quantify the level of disinformation being spread and to discover who is spreading it. It could be used by government and industry alike, and could help social media platforms detect accounts engaging in disinformation.
Disinformation campaigns are ultimately destructive, whether they incite social tensions, undermine elections, or erode trust in established medical facts. Targeting malicious accounts that purposely spread lies is exactly the sort of task that AI is brilliantly suited for.
2. One to be wary of
You would think that those dealing with nuclear weapons would be the most security-aware people around. Governments go to great lengths to keep information about the weapons, and the facilities that house them, secret and secure. But it turns out those protocols can be defeated by the humble flashcard.
American soldiers working on “secret” nuclear bases in Europe used web-based flashcards to help them memorise sensitive information. Security camera locations, the exact locations of modems connected to the vaults, passwords, usernames and more were exposed on public websites that help people memorise facts using flashcards.
Bellingcat, an investigative journalism website, uncovered this not-so-secret information using Google search. While the Pentagon moved immediately to remove these secrets from the web, it is still possible to rediscover them using the Wayback Machine, an archive of historical websites. Remember: once you upload something, it stays on the internet far longer than you’d expect.
What’s the point? Systems are only as secure as the people using them. Taking information out of a secure system and copying it into public online services defeats all the effort that went into protecting it. That is why, important as strong software security is, we must also make sure that people are well educated about the risks of data exposure.
3. One to amaze
Can’t stand the endless hours of Zoom calls? Feel disconnected staring at 2D images all day? With millions of people affected by travel restrictions, video-call software has helped bridge the gap, and Google wants to step up its game by making you feel like you’re there, together.
Project Starline makes video calls more realistic by showing the other person in 3D and at life size. It’s truly convincing, as you can see from the genuine reactions of people experiencing it for the first time in this video:
Three basic processes make this happen: capturing the participants in fine detail on video, compressing that data for transmission over existing network infrastructure, and rendering it in 3D in real time. The result is a video call that feels like an in-person conversation.
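The three stages can be thought of as a capture → compress → render pipeline. The sketch below is purely conceptual; every function in it is a stand-in invented for illustration, since Google has not published Starline’s implementation:

```python
# Conceptual capture -> compress -> render pipeline (illustration only;
# none of these functions reflect Project Starline's real internals).
def capture_frame(raw):
    # Stand-in for multi-camera depth capture: pair colour data with depth.
    return {"colour": raw, "depth": [0.0] * len(raw)}

def compress(frame):
    # Stand-in for bandwidth reduction before transmission: crude downsampling.
    return {key: values[::2] for key, values in frame.items()}

def render_3d(frame):
    # Stand-in for real-time, life-size 3D rendering at the receiving end.
    return f"rendering {len(frame['colour'])} points with depth"

for raw in (["px"] * 8,):  # a fake stream containing a single frame
    print(render_3d(compress(capture_frame(raw))))  # prints "rendering 4 points with depth"
```

The real system’s challenge is doing all three steps fast enough, over ordinary networks, that the pipeline disappears from the user’s awareness.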
As Google says:
“One of the things we are most proud of is that as soon as you sit down and start talking, the technology fades into the background, and you can focus on what’s most important: the person in front of you.”