Kern AI
@MeetKern
204 posts

Generative AI assistants you can trust. Select AI models trained on refined data for secure, seamless human-AI collaboration.

Potsdam, Germany · Joined August 2021
88 Following · 812 Followers

Pinned Tweet
Kern AI @MeetKern
If you’re not using custom chatbots for the documentation of your product, you are definitely missing out on one of the best use cases for #LLMs. Check out our latest video to see how it's done in under 20 minutes. Theory and code included. youtu.be/URYPsgVGDRc
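The docs-chatbot pattern the tweet points at can be sketched in a few lines: retrieve the documentation snippets most relevant to a question, then hand them to the model as grounded context. A minimal illustrative sketch, not the exact approach from the video — the word-overlap retriever and `build_prompt` helper are stand-ins for the embeddings and vector store a production setup would use:

```python
def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documentation snippets by word overlap
    # with the question. A real system would use embeddings and
    # a vector database instead.
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    # Assemble the retrieved snippets into a prompt so the LLM's
    # answer stays tied to the documentation.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
```

The prompt string then goes to whichever chat model you use; the retrieval step is what makes the answers product-specific.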
Kern AI retweeted
Tilores @TiloresHQ
Curious about how to use Entity RAG alongside vector databases to build highly accurate and trustworthy Large Language Models? Check this webinar we recorded recently with @MeetKern youtu.be/QUdIpXUxaAc?si…
Kern AI @MeetKern
In our latest blog post, "Data-Centric RAG", discover how a data-centric RAG approach is setting new standards for AI accuracy.
✅ Understand the data-centric RAG approach
✅ Learn how it reduces hallucinations
✅ See a real-world example with ChatGPT
eu1.hubs.ly/H08mL-T0
Kern AI retweeted
Steven Renwick @Major_Grooves
I will be talking about Entity RAG for LLMs in regulated environments together with @MeetKern on 23rd April. Want to use LLMs in a regulated environment? You need a trusted source of truth. We will build a chatbot live in the demo and show it working with and without RAG. 🤖 Link below. 👇
Kern AI retweeted
Tilores @TiloresHQ
LLMs are a hot topic but they can be dangerous - especially in regulated environments if they are not based on a trusted source of truth. Join us and @MeetKern in a webinar on 23rd April where they will discuss how to make reliable, accurate LLMs based on Entity RAG. 👇
Kern AI @MeetKern
Transform #customersupport with Kern AI! 🚀 Instantly access company knowledge, boost your team's performance, and deliver superior customer experiences. Our #AI, built on a data-centric RAG, provides accurate, reliable help. Learn more here eu1.hubs.ly/H08mFVP0
Kern AI retweeted
AI at Meta @AIatMeta
Today we’re releasing Code Llama, a large language model built on top of Llama 2, fine-tuned for coding & state-of-the-art among publicly available coding tools. Keeping with our open approach, Code Llama is publicly available now for both research & commercial use. More ⬇️
Kern AI @MeetKern
Excellent, comprehensive work covering LLM security. For each security issue, they address examples, prevention methods, and attack scenarios. Check it out — there is even a shorter version as slides!
Itamar Golan 🤓@ItakGol

We have finally made it! 🎉 I am both thrilled and humbled to announce the official launch of the OWASP Top 10 for Large Language Model Applications version 1.0! It is the first comprehensive, industry-standard reference for security vulnerabilities in applications using Large Language Models (LLMs). This marks a significant milestone in enabling the widespread safe and secure use of LLMs in production.

🎯 Explore our work - owasp.org/www-project-to…
Chat with me on LLM Security - calendly.com/golan-itamar/l… (And I'm also in Blackhat if you are interested).

💬 It has been a real honor to take a small part in this initiative led by the amazing Steve Wilson!

Kern AI retweeted
Riley Goodside @goodside
this is wild — kNN using a gzip-based distance metric outperforms BERT and other neural methods for OOD sentence classification

intuition: 2 texts are similar if cat-ing one to the other barely increases gzip size

no training, no tuning, no params — this is the entire algorithm:
Luke Gessler@LukeGessler

this paper's nuts. for sentence classification on out-of-domain datasets, all neural (Transformer or not) approaches lose to good old kNN on representations generated by.... gzip aclanthology.org/2023.findings-…
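The algorithm really is short enough to write out in full. A sketch of the gzip-distance kNN idea described above (function and variable names are mine, not from the paper):

```python
import gzip
from collections import Counter

def ncd(x: str, y: str) -> float:
    # Normalized compression distance: if y shares structure with x,
    # compressing their concatenation adds only a few extra bytes.
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: str, labeled: list[tuple[str, str]], k: int = 1) -> str:
    # k-nearest-neighbour vote over NCD distances -- no training,
    # no tuning, no learned parameters anywhere.
    nearest = sorted(labeled, key=lambda item: ncd(query, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

Note that on very short strings the fixed gzip header dominates, so the distance is most informative on sentence-length or longer texts.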

Kern AI retweeted
Jeremy Howard @jeremyphoward
For folks looking to learn NLP, the slides for Stanford's excellent cs224n course are all available here (scroll to the bottom for the latest version): web.stanford.edu/class/cs224n/s…
Kern AI retweeted
Andrej Karpathy @karpathy
Promising. Everyone should hope that we can throw away tokenization in LLMs. Doing so naively creates (byte-level) sequences that are too long, so the devil is in the details.

Tokenization means that LLMs are not actually fully end-to-end. There is a whole separate stage with its own training and inference, and additional libraries. It complicates the ingest of additional modalities. Tokenization also has many subtle sharp edges. A few examples:

That "trailing whitespace" error you've potentially seen in Playground? If you end your (text completion API) prompt with a space you are surprisingly creating a big domain gap, a likely source of many bugs: blog.scottlogic.com/2021/08/31/a-p…

Tokenization is why GPTs are bad at a number of very simple spelling / character manipulation tasks, e.g.: x.com/npew/status/15…

Tokenization creates attack surfaces, e.g. SolidGoldMagikarp, where some tokens are much more common during the training of the tokenizer than they are during the training of the GPT, feeding unoptimized activations into processing at test time: lesswrong.com/posts/aPeJE8bS…

The list goes on. TLDR: everyone should hope that tokenization could be thrown away. Maybe even more importantly, we may find general-purpose strategies for multi-scale training in the process.
AK@_akhaliq

MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers abs: arxiv.org/abs/2305.07185 paper page: huggingface.co/papers/2305.07…
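The "whole separate stage with its own training" is easy to see in miniature. Below is a toy byte-pair-encoding trainer — illustrative only; production tokenizers like GPT's BPE operate over bytes at far larger scale — showing that a tokenizer is itself fit to a corpus before the LLM ever sees a token:

```python
from collections import Counter

def _apply(tokens: list[str], a: str, b: str) -> list[str]:
    # Replace every adjacent (a, b) pair with the merged symbol a+b.
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def train_bpe(text: str, num_merges: int) -> list[tuple[str, str]]:
    # "Training" the tokenizer: repeatedly merge the most frequent
    # adjacent symbol pair. This happens before and separately from
    # any LLM training.
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        tokens = _apply(tokens, a, b)
    return merges

def encode(text: str, merges: list[tuple[str, str]]) -> list[str]:
    # Tokenizer "inference": replay the learned merges on new text.
    tokens = list(text)
    for a, b in merges:
        tokens = _apply(tokens, a, b)
    return tokens
```

Symbols that were rare in the tokenizer's corpus but common in the LLM's corpus (or vice versa) are exactly the mismatch behind glitches like SolidGoldMagikarp.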

Kern AI @MeetKern
Event season is about to start! In April you will find us at these events:
#FintechWorld (Berlin, 18th - 19th)
#PyCon (Berlin, 19th)
#LogiMAT (Stuttgart, 25th)
#InsureNXT (Cologne, 26th - 27th)
Come and say hi, we'll be well-equipped with some stickers 😉
Kern AI @MeetKern
Even though Easter is over, we still have an 🥚-citing surprise for you. Did you know that there is a hidden game in our open-source tool 'refinery'? Find a place where you can execute Python code and see the magic happen! Try it out on demo.kern.ai :)
Kern AI retweeted
Johannes Hötter @johoetter
With Twitter open-source, the issues of its repository have gone wild. Check out my blog post on this topic here: dev.to/meetkern/twitt…