
Department of Homeland Security embraces AI


The Department of Homeland Security has seen the opportunities and risks of artificial intelligence firsthand. It found a trafficking victim years later using an AI tool that conjured up an image of the child ten years older. But it has also been deceived in its investigations by fake images created by AI.

Now, the department is becoming the first federal agency to embrace the technology, with a plan to integrate generative AI models across a wide range of divisions. In partnership with OpenAI, Anthropic and Meta, the agency will launch pilot programs that use chatbots and other tools to help combat drug and human trafficking crimes, train immigration officers and prepare for emergency management across the country.

The rush to deploy the still-unproven technology is part of a larger scramble to keep up with the changes brought about by generative AI, which can create hyper-realistic images and videos and imitate human speech.

“It cannot be ignored,” Alejandro Mayorkas, secretary of the Department of Homeland Security, said in an interview. “And if one does not look to the future by recognizing and being prepared to manage its potential for good and its potential for harm, it will be too late and that is why we are moving quickly.”

The plan to integrate generative AI across the agency is the latest demonstration of how new technologies like OpenAI's ChatGPT are forcing even the most serious industries to re-evaluate how they conduct their work. Yet government agencies like DHS are likely to face some of the toughest scrutiny over how they use the technology, which has sparked rancorous debate because it has at times proven unreliable and discriminatory.

Federal agencies rushed to develop plans following President Biden's executive order, issued late last year, which requires the creation of safety standards for AI and its adoption across the federal government.

DHS, which employs 260,000 people, was created after the September 11 terrorist attacks and is responsible for protecting Americans within the country's borders, including combating human and drug trafficking, protecting critical infrastructure, responding to disasters and patrolling the border.

As part of the plan, the agency will hire 50 AI experts to work on solutions to protect the nation's critical infrastructure from AI-generated attacks and to combat the use of the technology to generate child sexual abuse material and create biological weapons.

In the pilot programs, on which it will spend $5 million, the agency will use AI models such as ChatGPT to aid investigations into child sexual abuse material and human and drug trafficking. It will also work with the companies to sift through its troves of text data to find patterns that can help investigators. For example, a detective looking for a suspect driving a blue pickup truck will be able to search across homeland security investigations for the same type of vehicle for the first time.

DHS will use chatbots to train immigration officers, who have previously trained with other employees and contractors posing as refugees and asylum seekers. AI tools will allow officers to get more training through mock interviews. The chatbots will also comb information about communities across the country to help the agency create disaster relief plans.

The agency will report the results of its pilot programs by the end of the year, said Eric Hysen, the department’s chief information officer and chief AI officer.

The agency chose OpenAI, Anthropic and Meta to experiment with a variety of tools and will use the cloud providers Microsoft, Google and Amazon in its pilot programs. “We cannot do this alone,” he said. “We need to work with the private sector to help define what constitutes responsible use of generative AI.”
