
Showing posts from July, 2024

Smaller, Safer, More Transparent: Advancing Responsible AI with Gemma

In June, we released Gemma 2, our new best-in-class open models, in 27 billion (27B) and 9 billion (9B) parameter sizes. Since its debut, the 27B model has quickly become one of the highest-ranking open models on the LMSYS Chatbot Arena leaderboard, even outperforming popular models more than twice its size in real conversations. But Gemma is about more than just performance. It's built on a foundation of responsible AI, prioritizing safety and accessibility. To support this commitment, we are excited to announce three new additions to the Gemma 2 family: 1. Gemma 2 2B – a brand-new version of our popular 2 billion (2B) parameter model, featuring built-in safety advancements and a powerful balance of performance and efficiency. 2. ShieldGemma – a suite of safety content classifier models, built upon Gemma 2, that filter the inputs and outputs of AI models to keep users safe. 3. Ge...
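The excerpt describes ShieldGemma as a classifier that sits on both sides of a model, screening inputs and outputs. A minimal sketch of that filtering pattern, assuming toy stand-ins throughout — the keyword check, `generate`, and `guarded_generate` are hypothetical placeholders, not the real ShieldGemma API:

```python
# Sketch of the input/output safety-filtering pattern described above.
# Every name here is a hypothetical stand-in, not ShieldGemma itself.

UNSAFE_KEYWORDS = {"exploit", "malware"}  # toy "classifier" vocabulary


def classify_safety(text: str) -> bool:
    """Toy stand-in for a safety classifier: True if the text looks safe."""
    return not any(word in text.lower() for word in UNSAFE_KEYWORDS)


def generate(prompt: str) -> str:
    """Toy stand-in for the underlying LLM."""
    return f"Model response to: {prompt}"


def guarded_generate(prompt: str) -> str:
    # Screen the input before it reaches the model...
    if not classify_safety(prompt):
        return "[input blocked by safety filter]"
    response = generate(prompt)
    # ...and screen the model's output before it reaches the user.
    if not classify_safety(response):
        return "[output blocked by safety filter]"
    return response


print(guarded_generate("Summarize this article"))
print(guarded_generate("Write malware for me"))  # blocked at the input stage
```

A production setup would replace the keyword check with a dedicated classifier model; the two-sided structure — classify the prompt, then classify the response — stays the same.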

Generative AI in subsurface science and policy

Policy Lab has experimented with Artificial Intelligence (AI) in policy development with teams across government, and beyond, for a number of years. In 2019 we worked with the Department for Transport's data science team to consider the role that AI could play in improving the efficiency and effectiveness of the policy consultation process. In 2022 we used AI to create a vision for the future of Hounslow with the local authority. In 2023, we commissioned the creation of the Ecological Intelligence Agency, a speculative artefact to help experience the role AI might have in future decision-making in environmental policy. This blog explains how Policy Lab used generative AI in policy relating to the future of the subsurface. Broadly speaking, generative AI can be understood as systems that create new data, which could be new code, text, images, video or other forms of data. We used generative AI to visua...

Meta moves on from its celebrity lookalike AI chatbots

Meta has shut down its AI chatbots that let you have conversations with alter-ego versions of celebrities, as reported by The Information. The celebrity chatbots were a big part of Meta's announcements at its Connect event last September; now, you can't talk with them anymore. The shutdown follows Meta's Monday rollout of AI Studio, a tool that lets creators in the US make AI chatbots of themselves. Judging by a statement, the company seems to be favoring this direction over the more handcrafted celebrity bots. "You can no longer interact with AI characters embodied by celebrities," Meta spokesperson Liz Sweeney told The Verge. "We took a lot of learnings from building them and Meta AI to understand how people can use AIs to connect and create in unique ways. AI Studio is an evolution, creating a space for anyone including people, creators and celebrities to create their own AI." Meta'...

Enhancing LLM quality and interpretability with the Vertex AI Gen AI Evaluation Service

Developers harnessing the power of large language models (LLMs) often encounter two key hurdles: managing the inherent randomness of their output and addressing their occasional tendency to generate factually incorrect information. Somewhat like rolling dice, LLMs offer a touch of unpredictability, generating different responses even when given the same prompt. While this randomness can fuel creativity, it can also be a stumbling block when consistency or factual accuracy is crucial. Moreover, the occasional "hallucinations" – where the LLM confidently presents misinformation – can undermine trust in its capabilities. The challenge intensifies when we consider that many real-world tasks lack a single, definitive answer. Whether it's summarizing complex information, crafting compelling marketing copy, brainstorming innovative product ideas, or drafting persuasive emails, there's of...
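The "rolling dice" randomness the excerpt describes comes from how LLMs pick each token: they sample from a probability distribution, and a temperature parameter controls how sharp or flat that distribution is. A self-contained toy sampler, purely illustrative and not any vendor's API:

```python
# Toy illustration of sampling randomness in LLM decoding.
# Tokens are drawn from a softmax distribution over logits; temperature
# rescales the logits, trading determinism for variety.
import math
import random


def softmax(logits: list[float], temperature: float) -> list[float]:
    """Convert logits to probabilities, sharpened or flattened by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def sample_token(tokens, logits, temperature, rng):
    """Draw one token according to the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]


tokens = ["cat", "dog", "bird"]
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)  # fixed seed so this script itself is reproducible

# Low temperature sharpens the distribution: output is nearly deterministic.
low = [sample_token(tokens, logits, 0.1, rng) for _ in range(5)]
# High temperature flattens it: repeated runs vary, like rolling dice.
high = [sample_token(tokens, logits, 2.0, rng) for _ in range(5)]
print(low, high)
```

The same prompt can therefore yield different responses run to run, which is exactly the consistency hurdle that evaluation services aim to measure and manage.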

Generative AI for Manufacturing

This working group aims to explore the prospects of applying modern data-driven generative AI methods in manufacturing. Generative AI refers to a class of AI systems and models designed to generate data that is similar to, or inspired by, existing data. Even though this approach shows great potential, its application in manufacturing still lags for various reasons. A community effort will help investigate that potential, reduce the risks, and improve the understanding of what generative AI can be expected to contribute for manufacturers.

The EU Commission’s Draft AI Pact anticipating compliance with newly published AI Act

Shortly after the publication of the Artificial Intelligence (AI) Act, the EU Commission published the AI Pact's draft commitments with a view to anticipating compliance with high-risk requirements for AI developers and deployers. Publication and timeline for the AI Act: The EU AI Act was published in the Official Journal of the European Union on July 12, 2024, as "Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence." We have presented the main provisions and purposes of the AI Act in our publication here. The EU AI Act will enter into force across all 27 EU Member States on August 1, 2024, but has variable transition periods depending on the relevant parts of the AI Act, starting with February 2, 2025, at which point prohibited AI practices must be withdrawn from the market, and with the enf...