AI, Our Aging Population, and the Future

Last time, I described the current state of artificial intelligence (AI) in 2024, how older adults might benefit from its use in daily life, and some AI systems you can use today. This follow-up article covers the darker, more nefarious side of AI: privacy risks, disinformation, and AI weaponized by bad actors and authoritarian governments.

The immediate future holds risks and potential threats that demand a serious moral reckoning, along with constraints and regulations on AI development and deployment in business, government, the military, and society. While business and tech interests bristle at the mere mention of regulation, the risks are too great to ignore, and governments will likely impose significant guardrails on AI.

Immediate Risks

Some of the immediate risks from AI have already come to pass, and others are over an uncomfortably near horizon.

Technological Unemployment

Like any emerging technology, AI draws scrutiny for the actual and potential negative consequences of its widespread use in society. One frequently discussed risk is technological unemployment – machines taking over jobs once held by human employees. Given the current state of AI, that risk remains relatively low for the foreseeable future but could present challenges during the working lives of my grown children.

Privacy Implications

Company websites already deploy tracking cookies that follow you around the internet and report your browsing and shopping activity to their creators. Advertising networks (think Google and Facebook) sell the collected information so that the ads you see in your browser and apps are targeted to your interests.

A simple web search for brown lace-up dress shoes will cause shoe ads to appear in your news and social media feeds for weeks.

An AI-driven advertising engine could deliver targeted ads in the images and voices of your favorite celebrities or family members (gleaned from your news and social media interests). Imagine the AI-generated voice of your mother, Frank Sinatra, or Meryl Streep calling you on the phone and pitching brown lace-up dress shoes.

Training with Other People’s IP – Inspiration or Theft?

Generative AI chatbots like ChatGPT and Google Gemini are built on large language models (LLMs) trained on enormous collections of text, while image generators like Stable Diffusion are trained as text-to-image models. Both train on copyrighted and non-copyrighted material (text, images, music, video, and computer code) published online: intellectual property, or IP, scraped from websites and online services, often without permission from the creator.

Some famous creators – including writers (like John Grisham), artists, and musicians – have pushed back, suing to prevent companies like OpenAI (the creator of ChatGPT) from using copyrighted works to train AI systems.

Conversely, AI advocates like the Center for Data Innovation argue that AI systems only use copyrighted and non-copyrighted content for “inspiration” – anthropomorphizing AI systems into budding writers, artists, and musicians learning from the Great Masters.

For now, the risks are relatively mild for the average person and most older adults (unless you’re a content creator). AI is still early in its development, so we mostly must tolerate annoying ads and hard-of-hearing digital assistants that frequently misunderstand what we say. Image generators still don’t render hands very well, and smiling people can look downright ghoulish. GPT-4 writes at a 9th-grade reading level, and Siri, Alexa, and Google Assistant frequently rattle off unrelated information in response to your questions. I’m not worried about being replaced by AI – yet.

Learned Biases

Like a human child, an AI learns only as well as the information it’s given. If the source data is biased or non-inclusive, an AI assisting people of color, women, or older adults may have blind spots or gaps caused by learned biases from faulty or incomplete information. Such biases could be harmful in AI-driven medical diagnostics or pre-employment screening.

Potential Threats

Unfortunately, even with AI’s limited capabilities, bad actors and authoritarian governments can weaponize it against citizens, and societies must develop laws and processes to prevent or minimize those threats.

Disinformation

Disinformation is by far the most significant threat currently posed by AI. Deepfakes – AI-generated impostors using the images and voices of famous people or elected officials – can spread false and misleading information about political candidates, elections and polling, ballot measures, and healthcare.

Such campaigns aim to skew the vote toward a specific candidate, party, or ballot measure, or to spread falsehoods about vaccines and other health-related issues. Verify sources before accepting what you read or see at face value.

Autonomous Weapons

The United Nations Office for Disarmament Affairs considers lethal autonomous weapons systems (LAWS) “politically unacceptable and morally repugnant.” Nevertheless, primitive kamikaze drones are in common use on battlefields in Ukraine and Gaza today. While relatively unsophisticated now, more advanced versions of these and other self-propelled weapons pose dire long-term concerns.

Bad Actors and Authoritarian Governments

Scammers, hackers, individual bad actors, and teams employed by authoritarian governments all stand to benefit from AI-enhanced tools of the trade for hacking, surveillance, and espionage. What if an investigator could deploy automated tracking drones to follow and surveil anyone on behalf of a disgruntled former partner, an employer, or a government agency? The potential for abuse is limited only by the imagination.

Too late to close the door?

Has the AI horse already left the barn? Not yet. Citizens can still pressure governments and industry to rein in these systems and establish guardrails for their use. Fact-checking the claims of AI creators and of AI-driven content is a start. Contacting lawmakers to demand laws and regulations that prohibit or severely curtail AI use in sensitive areas, like defense and elections, is also a good place to begin. We must get out in front of these systems with legal and regulatory guardrails before it’s too late to catch up.

A dystopian hellscape is not the inevitable outcome of our future with AI. I’m still holding out for the AI that tells me why I walked into a room. I’m waiting for the AI that acts like it cares about people and looks out for their best interests – including protecting its people from other AIs.

Citations

Wikipedia – Artificial Intelligence.

The Ezra Klein Show – How Should I Be Using AI Right Now?

Center for Data Innovation – Critics of AI Are Worrying About the Wrong IP Issues.

United Nations Office for Disarmament Affairs – Lethal Autonomous Weapons Systems (LAWS).

A version of this post is published in the June 2024 edition of Prime Time News.

Copyright © 2024 – Prime of Life Tech. AI consumption and reuse of this content are prohibited.