HumanFirst

120 posts

HumanFirst
@HumanFirst_ai

The Hub for Conversational AI Data.

Montréal, Québec · Joined May 2019
545 Following · 295 Followers
HumanFirst @HumanFirst_ai
Our CEO, @paisible, joined @usernews with @PublicationsTr to talk about using data and prompt engineering to prioritize AI investments based on ground-truth customer insights. ✅ The full episode is available here: bit.ly/3tJfWyh
HumanFirst @HumanFirst_ai
Language Model Cascading & Probabilistic Programming Language

The term Language Model Cascading (LMC) was coined in July 2022, which seems like a lifetime ago considering the speed at which the LLM narrative arc develops… Read more here: humanfirst.ai/blog/language-…
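The cascading idea in the tweet above can be sketched in a few lines: a cheap model answers first, and the call escalates to a stronger model only when the cheap answer looks unreliable. The two "models" below are stand-in functions with made-up confidences, not real vendor APIs.

```python
# Minimal sketch of Language Model Cascading (LMC): route to a cheap
# model first, escalate to an expensive one on low confidence.
# small_model/large_model are illustrative stubs, not real clients.

def small_model(prompt: str) -> tuple[str, float]:
    """Stand-in for a cheap LLM: returns (answer, confidence)."""
    if "capital of France" in prompt:
        return "Paris", 0.95
    return "I am not sure", 0.30

def large_model(prompt: str) -> tuple[str, float]:
    """Stand-in for a slower, more capable LLM."""
    return "A detailed answer from the larger model", 0.90

def cascade(prompt: str, threshold: float = 0.75) -> str:
    answer, confidence = small_model(prompt)
    if confidence >= threshold:
        return answer                  # cheap path: good enough
    answer, _ = large_model(prompt)    # escalate to the stronger model
    return answer

print(cascade("What is the capital of France?"))   # → Paris
print(cascade("Explain quantum error correction"))  # escalates
```

The threshold and the confidence signal are the design decisions; in practice confidence might come from log-probabilities or a verifier model rather than the model itself.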
HumanFirst @HumanFirst_ai
ICYMI - Announcement: A Powerful Partnership: HumanFirst Teams Up with Google Cloud to Boost Data Productivity, Custom AI Prompts and Models. Read more here: humanfirst.ai/blog/a-powerfu…
HumanFirst @HumanFirst_ai
RT @CobusGreylingZA: SmartLLMChain is a LangChain implementation of the self-critique chain principle. It is useful for particularly comple…
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
It does seem that the future will be one where Generative Apps become more model (LLM) agnostic and model migration takes place, with models becoming a utility. Blue oceans are turning into red oceans very fast, and a myriad of applications and products are under threat
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
A recent study found that when LLMs are presented with long input, performance is best when the relevant content is at the start or end of the input context, and degrades when the relevant information sits in the middle of a long context. A few days ago Haystack by deepset
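One common mitigation for this "lost in the middle" effect is to reorder retrieved chunks so the most relevant land at the edges of the context and the least relevant meet in the middle. A minimal sketch of one such reordering (my own illustration, not a specific library's implementation):

```python
# Given chunks sorted by relevance (best first), alternate them between
# the front and the back of the context so the top-ranked chunks sit at
# the start and end, where LLMs attend to them best.

def reorder_for_long_context(chunks: list[str]) -> list[str]:
    front, back = [], []
    for i, chunk in enumerate(chunks):   # chunks[0] is most relevant
        if i % 2 == 0:
            front.append(chunk)          # even ranks fill the front
        else:
            back.append(chunk)           # odd ranks fill the back
    return front + back[::-1]            # weakest chunks end up mid-context

ranked = ["A", "B", "C", "D", "E"]       # "A" most relevant, "E" least
print(reorder_for_long_context(ranked))  # → ['A', 'C', 'E', 'D', 'B']
```

Note the two strongest chunks ("A" and "B") end up first and last, while the weakest ("E") lands in the middle.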
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
Large Language Models (LLMs) are known to hallucinate. Hallucination is when an LLM generates a succinct, highly plausible answer that is factually incorrect. Hallucination can be mitigated by injecting prompts with contextually relevant data which the LLM can reference.
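The injection described above amounts to building a grounded prompt: retrieved passages go ahead of the question, and the instruction confines the model to that context. A minimal sketch, with an illustrative template (the exact wording is an assumption, not any product's format):

```python
# Build a prompt grounded in retrieved passages so the model answers
# from supplied context instead of hallucinating. The template text is
# illustrative only.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the context below. "
        'If the context does not contain the answer, say "I don\'t know".\n\n'
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    "When did HumanFirst join Twitter?",
    ["HumanFirst joined Twitter in May 2019."],
)
print(prompt)
```

The numbered passage markers also make it easy to ask the model to cite which chunk supported its answer.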
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
This article considers how Ragas can be combined with LangSmith for more detailed insights into how Ragas goes about evaluating a RAG/LLM implementation. Currently Ragas makes use of OpenAI, but it would make sense for Ragas to become more LLM-agnostic. And Ragas is based on
HumanFirst @HumanFirst_ai
In this article I consider the growing context windows of various Large Language Models (LLMs), the extent to which they can be used, and how a principle like RAG applies. Read more here: humanfirst.ai/blog/rag-llm-c…
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
Steps In Evaluating Retrieval Augmented Generation (RAG) Pipelines

The basic principle of RAG is to leverage external data sources. For each user query or question, a contextual chunk of text is retrieved and injected into the prompt, selected by its semantic similarity to the user question. But how can a RAG implementation be tested and benchmarked over time?

Input:
- Question: the questions the RAG pipeline will be evaluated on.
- Answer: the answer generated by the RAG pipeline, which will be presented to the user.
- Contexts: the contexts passed into the LLM to answer the question.
- Ground Truths: the ground-truth answers to the questions.

Output: Faithfulness, Answer Relevancy, Context Relevancy and Context Recall.

Link to the full article: medium.com/@cobusgreyling/steps-in-evaluating-retrieval-augmented-generation-rag-pipelines-7d4b393e62b3

#LargeLanguageModels #PromptEngineering #RAG
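The evaluation record described above can be sketched as a small data structure: each sample pairs a question with the pipeline's answer, the retrieved contexts, and a ground truth. The toy metric below is a crude word-overlap stand-in for context recall (does the retrieved context cover the ground truth?), not Ragas' actual implementation.

```python
# One evaluation sample for a RAG pipeline, plus a toy context-recall
# metric: the fraction of ground-truth words found in the retrieved
# contexts. Illustrative only; real metrics (e.g. in Ragas) are
# LLM-assisted, not word overlap.

import string
from dataclasses import dataclass

@dataclass
class RagSample:
    question: str
    answer: str            # generated by the RAG pipeline
    contexts: list[str]    # chunks injected into the prompt
    ground_truth: str      # reference answer

def _words(text: str) -> set[str]:
    """Lowercase words with ASCII punctuation stripped."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def toy_context_recall(sample: RagSample) -> float:
    truth = _words(sample.ground_truth)
    context = _words(" ".join(sample.contexts))
    return len(truth & context) / len(truth)

sample = RagSample(
    question="Where is HumanFirst based?",
    answer="Montréal, Québec",
    contexts=["HumanFirst is based in Montréal, Québec."],
    ground_truth="Montréal Québec",
)
print(toy_context_recall(sample))  # → 1.0
```

Faithfulness and answer relevancy would compare the generated answer against the contexts and the question respectively, typically with an LLM as judge.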
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
How to Mitigate LLM Hallucination and Single LLM Vendor Dependency (Link to the full article in the comments) Four years ago I wrote about the importance of context when developing a chatbot. Context is more relevant now with LLMs than ever before. Injecting a prompt with a
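The vendor-dependency half of the tweet above can be addressed by keeping application code behind a thin, model-agnostic interface, so one backend can be swapped for another. A minimal sketch using a structural `Protocol`; the two backends are stubs, not real vendor SDKs.

```python
# Application code depends on a minimal interface rather than a
# specific vendor SDK, so models can be migrated without rewriting the
# app. VendorA/VendorB are illustrative stubs.

from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"vendor-a: {prompt}"

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"vendor-b: {prompt}"

def answer(model: ChatModel, question: str) -> str:
    # App logic sees only ChatModel; swapping vendors is one argument.
    return model.complete(question)

print(answer(VendorA(), "hello"))  # → vendor-a: hello
print(answer(VendorB(), "hello"))  # → vendor-b: hello
```

In practice the interface would also normalize prompt formats, token limits, and error handling, since those differ more between vendors than the call signature does.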
HumanFirst retweeted
Cobus Greyling @CobusGreylingZA
The graph below illustrates how accuracy improves when the relevant information is at the beginning or end of the input, and how performance degrades when data in the middle is referenced.
Cobus Greyling tweet media