generation_ai
@_generation_ai

25 posts

Our meetup group is for everyone who is as passionate about the fascinating world of Generative AI as we are.

Rhein-Neckar-Kreis & Karlsruhe · Joined December 2023
25 Following · 9 Followers
generation_ai retweeted
Thinktecture
Thinktecture@thinktecture·
How can I have forms filled out fully automatically with AI? Our #W3C member and #GDE Christian Liebel delivers the answer in his free webinar! thinktecture.com/webinare/angul…
generation_ai
generation_ai@_generation_ai·
Here we go :) @jochenkluger has just started his talk on "AI in Companies" 🤙🏻
generation_ai
generation_ai@_generation_ai·
Here we go again :) Join us as we put LLMs to the test 🤘🏻 Come along to Generation AI - 02/24 - Hands-On #1 meetu.ps/e/MT0B9/yLFz2/i
generation_ai retweeted
Christian Weyer
Christian Weyer@christianweyer·
💡 In my very humble opinion, this is hands-down the *one* article that every software engineer should read to understand Large Language Models (LLMs). omrimallis.com/posts/understa… 👇🏼

"In this post, the author will delve into the inner workings of Large Language Models (LLMs) to provide a practical understanding of their functionality. The exploration will be facilitated through the use of llama.cpp, a pure C++ implementation of Meta’s LLaMA model, which the author finds to be an excellent resource for comprehending LLMs in depth. Its code is appreciated for being clean, concise, and straightforward, avoiding unnecessary abstractions. The specific commit version of this code will be used in the discussion. The focus will be on the inference aspect of LLMs, specifically how these trained models generate responses based on user prompts. [...]

Throughout the post, the author will guide readers through the entire inference process, covering several key topics, which include:

1. Tensors: Providing a basic overview of how mathematical operations in LLMs are executed using tensors, potentially with GPU acceleration.
2. Tokenization: Explaining how user prompts are broken down into a list of tokens, which serve as the input for the LLM.
3. Embedding: Detailing the process of transforming these tokens into vector representations.
4. The Transformer: Focusing on the core part of the LLM architecture, particularly the self-attention mechanism, which is vital for inference.
5. Sampling: Discussing how the model selects the next predicted token, with an exploration of two different sampling techniques.
6. The KV Cache: Examining a common optimization strategy used to enhance inference speed in large prompts, including a basic KV cache implementation.

By the end of this post, readers should gain a comprehensive understanding of how LLMs function, equipping them to delve into more advanced topics, some of which will be outlined in the final section."
#LargeLanguageModels #LLMs #Transformer #Llama #llamacpp #SoftwareEngineering
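The inference steps the quoted summary lists (tokenize, embed, transform, sample) end in a sampling decision. As a rough illustration of that last step only, here is a minimal, self-contained Python sketch of temperature-scaled softmax sampling over a vector of logits; this is a toy example, not llama.cpp's actual implementation:

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax: shift by the max logit before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=None):
    # temperature == 0 degenerates to greedy decoding (argmax).
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    probs = softmax([x / temperature for x in logits])
    # Draw from the categorical distribution via the cumulative sum.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Greedy decoding always picks the highest-logit token.
print(sample_token([2.0, 1.0, 0.1], temperature=0))  # -> 0
```

Raising the temperature flattens the distribution (more diverse output); lowering it sharpens the distribution toward the argmax.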
generation_ai retweeted
Thinktecture
Thinktecture@thinktecture·
👋 Greetings from Germany! 🇩🇪 We're on the lookout for talented individuals to join our team—specializing in Angular and Angular + UX/UI design. Important: Proficiency in German and residency within the EU are essential to serve our customer base. 🚀 lnk.thinktecture.cloud/jobs_ng_ux 🚀 lnk.thinktecture.cloud/jobs_ng
generation_ai retweeted
Kenny Pflug
Kenny Pflug@feO2x·
@BASTAcon Check out, for example, my talks on securely hosting Azure OpenAI Service on Feb 13 or on all the internals of async-await in .NET on Feb 14. basta.net/speaker/kenny-…