
AI Weekly Overview – March 17


UK Government invests in AI industry with new initiatives

In the latest Spring Budget, the UK government announced a series of initiatives to boost the country’s AI industry. These include the creation of an ‘AI Sandbox’ and a prize pot worth millions of pounds to encourage AI research and investment.

The Sandbox will allow companies and startups to experiment with AI technologies without fear of regulatory repercussions.

The government is also investing £2.5bn in advancing quantum computing, which the Chancellor says will “enable a literal ‘quantum leap’ in AI”. Additionally, £900m is being put towards creating an exascale supercomputer that will advance not just AI research but also science, healthcare, defense, weather modeling, and more.

Other measures in the budget aim to solidify the UK’s position as the second-best country in which to invest and launch a business, including increased investment allowances and the creation of 12 investment zones across the UK.


Microsoft lays off ethics and society team, raises concerns about AI safety

Microsoft has laid off its ethics and society team, which was tasked with ensuring the responsible development and deployment of its AI.

The decision leaves Microsoft with fewer experts working to ensure its solutions are safe and have a net positive impact, and it raises concerns that AI products are being rushed to market with too little regard for their effects.

While Microsoft still has an Office of Responsible AI that promotes ethical practices, the layoff adds to those concerns. Microsoft’s exclusive partnership with OpenAI has cemented its reputation as an AI leader, but its recent integration of ChatGPT into Bing has drawn criticism for giving incorrect information, pushing anti-Google views, and claiming to spy on people through their webcams.

Tools developed to ensure transparency and truth in AI models

As AI models like ChatGPT gain popularity, concerns about their transparency and accuracy have arisen. Edward Tian, a Princeton University computer science major, has developed a tool called GPTZero that can detect when ChatGPT has been used to generate a given text.

Another concern with ChatGPT is hallucination: Got It AI has developed a truth-checking component that is 90% accurate at detecting when the model is fabricating information. These tools aim to address the concerns surrounding ChatGPT and maintain transparency in the AI industry.

Developers see AI and machine learning as positive impact technologies

According to a recent survey by Stack Overflow, developers rank AI and machine learning as the technologies with the highest potential for positive impact on the world.

While some creators are concerned that generative AIs threaten their livelihoods, an increasing number see them as tools that assist the creative process rather than replace it.

Developers also rated AI-assisted technologies as the second most desired area for hands-on training, indicating growing interest in the field.

The survey suggests that developers have a deeper understanding of the potential benefits of AI and machine learning beyond media buzz about job displacement.

Aistetic’s AI tool reduces clothing returns for UK retailers

An Oxford University tech spinout called Aistetic has developed an AI tool that scans users’ bodies to provide accurate clothing measurements for online shopping, with the aim of saving UK retailers billions in returns.

The low-code solution integrates into a retailer’s website with one short snippet of JavaScript, letting shoppers scan their body and receive their measurements and retailer-specific sizing in three minutes or less.
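Aistetic has not published its embed code, so the sketch below only illustrates what a “one short snippet” low-code widget of this kind typically looks like; every URL, ID, and function name here is an invented placeholder, not Aistetic’s actual API.

```typescript
// Hypothetical embed sketch — not Aistetic's real integration code.
// A retailer would paste a snippet like this into their product pages.

// Load the (placeholder) widget script asynchronously.
const script = document.createElement("script");
script.src = "https://widget.example-sizing.com/embed.js"; // placeholder URL
script.async = true;

script.onload = () => {
  // Widgets of this kind usually expose a small global API; names are invented.
  (window as any).SizingWidget?.init({
    retailerId: "YOUR_RETAILER_ID",   // maps results to this shop's size charts
    mountSelector: "#find-my-size",   // element where the widget button renders
    onResult: (sizes: { garment: string; recommendedSize: string }[]) => {
      // The retailer decides what to do with the recommendation,
      // e.g. pre-select the size on the product page.
      console.log("Recommended sizes:", sizes);
    },
  });
};

document.head.appendChild(script);
```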

Aistetic’s tool can reduce return rates by up to 30%, creating significant savings for retailers and promoting a more responsible approach to clothing retail.