🤖 AI Power Players: How Salesforce and Instacart are Changing The Game
PLUS: EU's Stricter Rules for AI Models
Happy Friday and welcome back to How to Actually AI.
Here's what we've got for you today:
📰 TOP PICKS: OpenAI’s valuation at $86 Billion
🚀 GROWTH: AI’s Social and Ethical Implications
💾 QUICKBYTES: News roundup of everything AI
📺️ WORTH A WATCH: KING of AI Images
Read time: 3 minutes.
TOP PICKS
Both Salesforce and Instacart are charting new territory with AI at the helm. Salesforce's AI tool, Einstein, is now processing over 80 billion predictions per day, guiding decisions in sales, service, and marketing. Instacart, meanwhile, is leveraging AI to optimize its retail platform, resulting in a 50% reduction in out-of-stock items.
In a recent round of talks about selling employee shares, OpenAI, one of the leading artificial intelligence research labs, is reportedly being valued at a whopping $86 billion. This is a testament to the growing significance and potential of AI in reshaping our world.
In an attempt to regulate the rapidly evolving AI landscape, the European Union is planning to introduce stricter rules for the most powerful generative AI models. This move is expected to have far-reaching implications for the AI industry, emphasizing the importance of ethical considerations and accountability in AI technology.
GROWTH
Harnessing AI: Decoding the Social and Ethical Implications
We often marvel at the sheer power and potential of artificial intelligence. It's fascinating, it's transformative, it's...a little intimidating?
Indeed, the implications of AI are not just technological but deeply social and ethical too. This is precisely what the experts over at DeepMind dig into in their latest article. So, let's break it down.
DeepMind, renowned for its cutting-edge AI research, highlights the growing significance of generative AI models — the AI systems that can create new content, be it text, images, or even music. As exhilarating as it is to witness AI-generated art or literature, this power also poses real risks.
In their post, DeepMind discusses the potential misuse of these generative models. They also underline the importance of conducting proactive and ongoing risk evaluations to keep any negative impacts in check.
One key takeaway from the article is the creation of a dedicated team at DeepMind tasked with comprehensively evaluating the risks of AI and ML deployments.
Just as importantly, they are moving toward more transparency, working to share as much of their safety and policy research as possible so the broader AI community can scrutinize it, learn from it, and contribute.
DeepMind's commitment to tackling the complex ethical landscape of AI is commendable. As AI continues to evolve and seep into every facet of our lives, initiatives like these will play a pivotal role in ensuring we navigate this brave new world safely and responsibly.
🔎 To get the full scoop, check out the full article on DeepMind's blog.