At least 500 Space Force staff members have been affected, according to the department’s former chief software officer.
The United States Space Force has temporarily banned its staff from using generative artificial intelligence tools while on duty to protect government data, according to reports.
Space Force members were informed that they “are not authorized” to use web-based generative AI tools to create text, images and other media unless specifically approved, according to an Oct. 12 report by Bloomberg, citing a Sept. 29 memorandum addressed to the Guardian Workforce (Space Force members).
“Generative AI will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s deputy chief of space operations for technology and innovation, reportedly said in the memorandum.
However, Costa cited concerns over current cybersecurity and data handling standards, explaining that AI and large language model (LLM) adoption needs to be more “responsible.”
The United States Space Force is the space service branch of the U.S. Armed Forces, tasked with protecting the U.S. and allied interests in space.
US Space Force has temporarily banned the use of web-based generative artificial intelligence tools and so-called large language models that power them, citing data security and other concerns, according to a memo seen by Bloomberg News. https://t.co/Rgy3q8SDCS
— Katrina Manson (@KatrinaManson) October 11, 2023
The Space Force’s decision has already impacted at least 500 individuals using a generative AI platform called “Ask Sage,” according to Bloomberg, citing comments from Nick Chaillan, former chief software officer for the United States Air Force and Space Force.
Chaillan reportedly criticized the Space Force’s decision. “Clearly, this is going to put us years behind China,” he wrote in a September email to Costa and other senior defense officials.
“It’s a very short-sighted decision,” Chaillan added.
Chaillan noted that the U.S. Central Intelligence Agency and its departments have developed generative AI tools of their own that meet data security standards.
Related: Data protection in AI chatting: Does ChatGPT comply with GDPR standards?
Concerns that LLMs could leak private information to the public have prompted several governments to act in recent months.
Italy temporarily blocked AI chatbot ChatGPT in March, citing suspected breaches of data privacy rules before reversing its decision about a month later.
Tech giants such as Apple, Amazon and Samsung are among the firms that have also banned or restricted employees from using ChatGPT-like AI tools at work.
Magazine: Musk’s alleged price manipulation, the Satoshi AI chatbot and more