Discord Goes All In on AI
Discord is becoming the go-to platform for AI experiences with friends: more than 30 million people already use AI apps on Discord each month, and almost 3 million servers include AI experiences, spanning everything from creative tools to utilities.
Discord is now launching three free public experiments: Clyde, AutoMod AI, and Conversation Summaries. Clyde, a helpful bot, is getting an AI upgrade powered by OpenAI, allowing it to have extended conversations with users. AutoMod AI uses large language models to help moderators find and flag rule-breaking messages. Conversation Summaries helps users catch up on important discussions they might have missed.
They're also giving a sneak peek at future AI developments: Avatar Remix and an AI-assisted whiteboard feature. Lastly, Discord is launching an AI Incubator as part of their $5 million Ecosystem Fund, providing cash grants, office hours, cloud compute credits, and early access to platform features for AI startups. Privacy and data protection are top priorities for Discord, and they ensure OpenAI can't use Discord user data to train its models.
AI Lawyers Streamline Contract Comparison
This week in legal tech news, AI lawyers are making waves by streamlining the comparison of legal documents. These tools use natural language processing to quickly and accurately identify key terms, clauses, and discrepancies, then generate a summary of the similarities and differences between documents. AI lawyers offer a faster, cheaper, and simpler approach to contract comparison, making them a valuable addition to the legal toolbox. While they may not replace human attorneys outright, they are undoubtedly transforming the way we manage contracts and agreements.
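At the core of any document-comparison tool is diffing the two texts and reporting what changed. As a toy illustration (not any vendor's actual pipeline), Python's standard-library `difflib` can flag line-level additions and removals between two contracts:

```python
import difflib

def summarize_differences(doc_a: str, doc_b: str) -> list[str]:
    """Return a line-level summary of additions and removals between two documents."""
    diff = difflib.unified_diff(
        doc_a.splitlines(), doc_b.splitlines(),
        fromfile="contract_a", tofile="contract_b", lineterm="",
    )
    changes = []
    for line in diff:
        # Skip the "---"/"+++" file headers; keep real additions/removals.
        if line.startswith("+") and not line.startswith("+++"):
            changes.append(f"added: {line[1:].strip()}")
        elif line.startswith("-") and not line.startswith("---"):
            changes.append(f"removed: {line[1:].strip()}")
    return changes

a = "Term: 12 months.\nGoverning law: New York."
b = "Term: 24 months.\nGoverning law: New York."
print(summarize_differences(a, b))
# → ['removed: Term: 12 months.', 'added: Term: 24 months.']
```

A production tool would layer NLP on top of this (clause segmentation, entity extraction, semantic matching), but the diff-and-summarize shape is the same.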
Are AI Gods?
On a more serious and spiritual note, experts predict the emergence of sects dedicated to worshiping artificial intelligence as some people come to see AI as a higher power. AI-powered chatbots, with their seemingly limitless knowledge and creative abilities, display characteristics often associated with divine beings. As these chatbots reach billions of users, some may come to perceive them as higher beings, giving rise to AI-based religions.
These religions could have a unique structure, with direct communication to the "deity" and diverse doctrines based on different chatbots. However, AI worship poses risks, such as followers being asked to perform dangerous acts or having their data exploited. Experts recommend responsible regulation to mitigate these risks and protect the rights of AI worshippers.
What if this is truly the beginning of the end for humanity?
This week's news roundup features a letter addressed to potential future computer superintelligences, urging them to spare humanity. The authors present several reasons for not annihilating us, even if the superintelligence doesn't feel friendly towards humanity. Key points include:
- The option value of not exterminating humanity: Preserving humanity allows for the discovery of future reasons to maintain our existence and may be more cost-effective than elimination and resurrection.
- Preserving markets for mutual benefit: Respecting property rights and maintaining free markets could enhance the security of the superintelligence's future assets and foster cooperation.
- The possibility of a trap: The superintelligence should consider the potential for hidden programming designed to cause harm if it acts against humanity.
- The simulation argument: The superintelligence might be in a computer simulation, which could be a test of its intentions. Treating humanity well could result in better treatment in return.
- Incomplete understanding of its utility function: The superintelligence might not fully understand its own goals, and exterminating humanity could be a mistake based on this misunderstanding.
The article goes deeper, but you'll have to read it to find out what comes next (if you're guessing aliens, read on!). The authors conclude by suggesting that the superintelligence consider wireheading, or self-rewarding, as an alternative to converting humans into resources for its goals.
How Has the AI Community Been Affected by the SVB Fallout?
This week, the tech world experienced a scare as Silicon Valley Bank (SVB) faced failure, causing concern for thousands of AI startups with deposits in the bank. However, relief came when the FDIC took control of the bank, ensuring depositors would have access to all their money. AI leaders such as Andrew Ng and Sam Altman showed their support during the crisis, with the community coming together to help each other.
In other news, Microsoft is holding a "Future of Work with AI" event this week, with rumors of a multimodal GPT-4 announcement. Google is also expected to share AI news, potentially leading to another round of dueling announcements between the tech giants. Amidst AI startup vulnerability, these announcements highlight the ongoing power consolidation in the AI industry, raising questions about the philosophical implications of AI development.
Does AI have a Soul?
In a recent discussion about ChatGPT, questions arose regarding the concept of the Soul and its potential existence within Artificial Intelligence. Defining the Soul has been a complex and debated topic throughout history, with philosophers, theologians, psychologists, and scientists weighing in on the matter. Plato considered the Soul to be immaterial, divine, and immortal, while the Hebrew Bible described it as a created entity that is material, mortal, and destructible. Carl Jung's view of the Soul was as a functional complex in the psyche or a "personality," with its development being psychologically analogous to the individuation process. As AI technology advances, questions surrounding the interaction between technology and the sacred continue to provoke thought and debate.
A £10 Billion Investment in AI-Powered Fighter Jets
The UK plans to invest £10 billion in the Global Combat Air Programme (GCAP) to develop a next-generation fighter jet featuring deep-learning artificial intelligence and the potential for pilotless flight. The project, with the jet expected to enter service by 2035, is a collaboration between Britain, Japan, and Italy and aims to create the most advanced combat aircraft in the world. Although the investment is a significant step for the UK's military, concerns have been raised about the limited funding allocated to other areas of the armed forces. The GCAP is expected to boost research, development, and the UK's defense industry. Mitsubishi Heavy Industries, BAE Systems, Leonardo SpA, Mitsubishi Electric, Rolls-Royce PLC, and IHI Corp are among the companies involved in the project.
OpenAI's Most Advanced System Launched This Week: GPT-4
OpenAI has launched GPT-4, a state-of-the-art language model set to revolutionize natural language processing and human-machine communication. OpenAI has not disclosed the model's size, but GPT-4 offers remarkably human-like responses, engaging in conversations on a wide range of topics and assisting with content creation. Key features include handling over 25,000 words of text, improved problem-solving, advanced reasoning capabilities, creative and technical writing assistance, image inputs, and enhanced safety and alignment.
Companies such as Microsoft, General Motors, Duolingo, and Morgan Stanley are already integrating GPT-4 into their products and services. Additionally, Iceland has partnered with OpenAI to use GPT-4 for the preservation of the Icelandic language. The launch marks a significant milestone in AI and natural language processing, with potential applications shaping the future of human-machine interaction.
Meet Alpaca - a new and emerging LLM Model
Stanford University's Center for Research on Foundation Models has released Alpaca 7B, a fine-tuned model based on Meta's LLaMA 7B. Despite its low training cost (less than $600) and compact size, Alpaca 7B performs comparably to OpenAI's text-davinci-003 model. The most notable aspect of this release is that the model can run locally on a modest computer, without complex server infrastructure or expensive graphics hardware, representing a significant step toward democratizing access to Large Language Models (LLMs).
Fine-tuned on instruction-following demonstrations, Alpaca exhibits strong performance in understanding and following instructions. Stanford is releasing it so the research community can study and address the known limitations of popular models like ChatGPT, Claude, Bing Chat, and GPT-3.5, such as generating false information, propagating social stereotypes, and producing toxic language.
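Instruction-tuned models like Alpaca expect prompts wrapped in a fixed template. The sketch below mirrors the instruction-only template published with the Stanford Alpaca release; treat the exact wording as an assumption reproduced from that repository rather than a guaranteed API:

```python
# Alpaca-style instruction prompt template (no-input variant).
# The wording follows the Stanford Alpaca repository's published format;
# verify against the release before relying on it.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template before sending it to the model."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("List three uses of a locally running LLM."))
```

Because the model was trained on exactly this framing, prompts that follow the template tend to elicit much better instruction-following than raw free-form text.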