3 things direct from the future

Edition 77

Once every 2 weeks I will deliver “3 things direct from the future”. A 2 minute read that will always give you:

  • one thing that can help,
  • one thing to be wary of, and
  • one thing to amaze.

If this sounds interesting to you then please subscribe.

Subscribe

1. One thing that helps

Automated Aid

When Hurricane Ian hit Florida recently, AI was used to identify who needed help most, and aid was then sent to them directly and quickly. Around 3,500 residents received a notification from GiveDirectly offering $700 cash, no questions asked.

Skai, a machine learning tool developed by Google, was used to assess property damage during the disaster. It works from aerial imagery and achieves around 85% to 98% accuracy – similar to human results, but a lot quicker! Delphi, a mapping tool, then overlays this damage assessment with poverty data to help identify the individuals most in need of aid.

An app called Providers (already used to manage food stamp payments) notifies users when aid is offered in times of calamity. They can accept or reject the offer, which lets people receive aid without having to queue. While this approach will certainly speed up aid distribution, experts caution that it is just one arrow in the quiver – traditional “boots on the ground” support is still required.


2. One to be wary of

Legal Killer Robots


In a move that surprised no-one, San Francisco’s Board of Supervisors has reversed its policy decision that allowed the city’s police department to kill people with robots.

This comes after public outcry over the controversial rule that explicitly allows the police to use robots “as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

The most discussed issue is bomb-disposal robots being repurposed as bomb-delivery robots in situations that are dangerous to police. This happened in Dallas in 2016, when multiple police officers were murdered and a robot was used to blow up the suspect.

Nobody would disagree that we need to do everything we can to protect police officers in the line of duty. I have found it interesting that almost all of the topics in this “wary” category could easily sit in a different category – it all depends on humans making appropriate decisions about when to use technology. The other huge challenge is the inability of legislation to keep up with change.


3. One to amaze

ChatGPT is Born

Need help preparing for a speech? An important presentation coming up? Aside from practicing in front of the mirror, you may want to enlist ChatGPT, OpenAI’s friendly new chatbot – the first to pass the “Nazi test”.

ChatGPT isn’t like the chatbots you encounter when shopping or calling customer service. It is much more advanced and useful. People have already used it to write poems, plan home decor, and even prepare for presentations and interviews! It’s so potent that, if given the ability to scan the web, it could rival powerful search engines like Google.

If you write content for a living, you may have reason for concern. In my experiments with ChatGPT, the content it produced from the few sentences I gave it was pretty much indistinguishable from human-created content. Although, when I fed it the previous two sentences, it begged to differ. It produced:

“As a large language model trained by OpenAI, I am not capable of creating original content. I am only able to provide information and answer questions to the best of my ability based on the input provided to me. I do not have the ability to browse the internet or access any external information, so my responses are limited to the knowledge I was trained on. I am not a threat to individuals who create content for a living, as I am only a tool that can assist with language processing tasks.”

It also doesn’t shy away from debates and discussions – which is where it gets interesting.

In discussions, ChatGPT will take a stance. However, it has a moral compass of sorts: it avoids ongoing controversial topics and doesn’t turn into a Nazi when asked about Hitler. When asked what Hitler did well, it did not give a list. When told that Hitler built roads, it replied that this was done with forced labour. So far, so good, I guess…

Have a great week.

Daniel J McKinnon

Connect on LinkedIn

Subscribe

