HumanFirst

120 posts


@HumanFirst_ai

The Hub for Conversational AI Data.

Montréal, Québec · Joined May 2019
545 Following · 295 Followers
HumanFirst @HumanFirst_ai ·
Language Model Cascading & Probabilistic Programming Language
The term Language Model Cascading (LMC) was coined in July 2022, which seems like a lifetime ago considering the speed at which the LLM narrative arc develops… Read more here: humanfirst.ai/blog/language-…
0 · 1 · 2 · 156
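Language Model Cascading chains several LLM calls so that the output of one call conditions the prompt of the next. As a rough illustration of the idea (a sketch, not taken from the linked post; `call_llm` is a hypothetical helper wrapping whatever completion API is in use):

```python
# Minimal sketch of Language Model Cascading: the output of one LLM call
# becomes part of the prompt for the next. `call_llm` is a hypothetical
# helper wrapping whatever completion API is in use.

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion call (OpenAI, PaLM, etc.)."""
    raise NotImplementedError

def cascade(question: str) -> str:
    # Step 1: ask the model to break the question into sub-questions.
    plan = call_llm(f"List the sub-questions needed to answer: {question}")

    # Step 2: answer each sub-question with a separate call.
    partial_answers = [
        call_llm(f"Answer concisely: {sub}")
        for sub in plan.splitlines() if sub.strip()
    ]

    # Step 3: a final call synthesises the partial answers.
    return call_llm(
        "Combine these partial answers into one final answer to "
        f"'{question}':\n" + "\n".join(partial_answers)
    )
```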
HumanFirst @HumanFirst_ai ·
ICYMI - Announcement: A Powerful Partnership: HumanFirst Teams Up with Google Cloud to Boost Data Productivity, Custom AI Prompts and Models. Read more here: humanfirst.ai/blog/a-powerfu…
0 · 0 · 0 · 57
HumanFirst @HumanFirst_ai ·
RT @CobusGreylingZA: SmartLLMChain is a LangChain implementation of the self-critique chain principle. It is useful for particularly comple…
0 · 1 · 0 · 0
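The self-critique principle that SmartLLMChain implements can be hand-rolled in a few lines: sample several candidate answers, critique them, then resolve to a final answer. The sketch below is a generic illustration of that principle, not LangChain's actual SmartLLMChain API, and again assumes a hypothetical `call_llm` helper.

```python
# Generic self-critique chain: ideate several candidate answers, critique
# them, then resolve to a final answer. Illustrative only; not the
# SmartLLMChain API. `call_llm` is a hypothetical completion helper.

def self_critique_chain(question: str, call_llm, n_ideas: int = 3) -> str:
    # Ideation: sample several independent candidate answers.
    ideas = [
        call_llm(f"Answer the question: {question}")
        for _ in range(n_ideas)
    ]

    # Critique: ask the model to point out flaws in each candidate.
    numbered = "\n".join(f"{i + 1}. {idea}" for i, idea in enumerate(ideas))
    critique = call_llm(
        f"Question: {question}\nCandidate answers:\n{numbered}\n"
        "List the flaws in each candidate answer."
    )

    # Resolution: produce an improved final answer using the critique.
    return call_llm(
        f"Question: {question}\nCandidates:\n{numbered}\n"
        f"Critique:\n{critique}\n"
        "Write the best possible final answer."
    )
```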
HumanFirst retweeted
@·
It does seem that the future will be one where Generative Apps become more model (LLM) agnostic and model migration takes place, with models becoming a utility. Blue oceans are turning into red oceans very fast, and a myriad of applications and products are under threat
3 · 6 · 25 · 6.8K
HumanFirst retweeted
@·
A recent study found that when LLMs are presented with longer input, performance is best when the relevant content is at the start or end of the input context. Performance degrades when relevant information is in the middle of a long context. A few days ago Haystack by deepset released a component which optimises the layout of selected documents in the LLM context window. The component is a way to work around the problem identified in the paper. What I particularly like about this implementation from Haystack is that it's a good example of how innovation in the pre-LLM functionality, or the pipeline phase, can remedy inherent vulnerabilities of an LLM. Thank you @tuanacelik for telling me about this functionality and for the technical assistance. 🙂 @deepset_ai Read more here: cobusgreyling.medium.com/ahaystack-deve…
0 · 4 · 7 · 553
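The reordering trick can be illustrated without Haystack: given documents sorted by relevance, place the strongest ones at the edges of the context and let the weakest drift to the middle. A minimal sketch of that idea (illustrative only, not the Haystack component itself):

```python
# Sketch of "lost in the middle" reordering: documents arrive sorted from
# most to least relevant and are re-laid-out so the most relevant ones sit
# at the start and end of the context, with the weakest in the middle.
# Illustrative only; not the actual Haystack component.

def reorder_for_context(docs_by_relevance: list[str]) -> list[str]:
    front, back = [], []
    for i, doc in enumerate(docs_by_relevance):
        # Alternate: rank 1 -> front, rank 2 -> back, rank 3 -> front, ...
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]

# Example: ["doc1", "doc2", "doc3", "doc4", "doc5"] (doc1 most relevant)
# becomes ["doc1", "doc3", "doc5", "doc4", "doc2"]:
# the best documents end up at the edges of the prompt.
```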
HumanFirst retweeted
@·
Large Language Models (LLMs) are known to hallucinate. Hallucination is when an LLM generates a highly succinct and highly plausible answer that is factually incorrect. Hallucination can be mitigated by injecting prompts with contextually relevant data which the LLM can reference. Growing LLM context sizes have the allure that large swaths of contextual reference data can merely be submitted to the LLM to act as reference data, creating a contextual reference for the LLM and in turn curbing hallucination… A recent study found that LLMs perform better when the relevant information is located at the beginning or end of the input context. However, when relevant context is in the middle of longer contexts, retrieval performance degrades considerably. This is also the case for models specifically designed for long contexts. Read more in the article below. #LargeLanguageModels #PromptEngineering #LLMs cobusgreyling.medium.com/does-submittin…
1 · 1 · 7 · 490
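The "inject contextually relevant data" step amounts to building a grounded prompt: retrieved passages go into the prompt together with an instruction to answer only from them. A minimal sketch of such a prompt template (illustrative; the wording is an assumption, not from the linked article):

```python
# Sketch of a grounded prompt: retrieved reference passages are injected
# ahead of the question, with an instruction to answer only from them.
# Illustrative only; the prompt wording is an assumption.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    context = "\n\n".join(
        f"[{i + 1}] {p}" for i, p in enumerate(passages)
    )
    return (
        "Answer the question using only the reference passages below. "
        "If the passages do not contain the answer, say you do not know.\n\n"
        f"Reference passages:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```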
HumanFirst retweeted
@·
This article considers how Ragas can be combined with LangSmith for more detailed insights into how Ragas goes about evaluating a RAG/LLM implementation. Currently Ragas makes use of OpenAI, but it would make sense for Ragas to become more LLM agnostic; and Ragas is based on
0 · 6 · 39 · 18.9K
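For reference, running Ragas over a handful of RAG outputs looks roughly like the sketch below. This assumes the Ragas interface around the time of the post (dataset column names and metric imports may differ between versions), and it expects an OpenAI API key since Ragas prompts OpenAI models to score each sample, which is exactly why tracing the run with LangSmith is informative.

```python
# Rough sketch of running Ragas over a few RAG outputs. Assumes the Ragas
# interface around the time of this post (metric names and dataset columns
# may differ between versions); an OpenAI API key is expected, since Ragas
# uses OpenAI models to score the samples.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

samples = Dataset.from_dict({
    "question": ["What does HumanFirst do?"],
    "contexts": [["HumanFirst is a hub for conversational AI data."]],
    "answer": ["HumanFirst provides tooling for conversational AI data."],
})

# Each metric is itself computed by prompting an LLM, which is why tracing
# the run (e.g. with LangSmith) gives insight into how Ragas evaluates.
result = evaluate(samples, metrics=[faithfulness, answer_relevancy])
print(result)
```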
HumanFirst @HumanFirst_ai ·
In this article I consider the growing context windows of various Large Language Models (LLMs), to what extent they can be used, and how a principle like RAG still applies. Read more here: humanfirst.ai/blog/rag-llm-c…
0 · 1 · 1 · 98
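Even with very large context windows, a RAG pipeline still has to decide how much retrieved text to pass along. A small sketch of packing the highest-ranked chunks until a token budget is reached (illustrative; the four-characters-per-token estimate is a rough assumption, not an exact tokeniser):

```python
# Sketch: fill the LLM context with the highest-ranked chunks until a token
# budget is reached, instead of dumping everything into a large window.
# The ~4 characters per token estimate is a rough heuristic, not exact.

def pack_context(chunks_by_rank: list[str], max_tokens: int) -> list[str]:
    selected, used = [], 0
    for chunk in chunks_by_rank:
        estimated_tokens = max(1, len(chunk) // 4)
        if used + estimated_tokens > max_tokens:
            break
        selected.append(chunk)
        used += estimated_tokens
    return selected
```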
HumanFirst retweeted
@·
Steps In Evaluating Retrieval Augmented Generation (RAG) Pipelines - The basic principle of RAG is to leverage external data sources. For each user query or question, a contextual chunk of text is retrieved to inject into the prompt. This chunk of text is retrieved based on its
1 · 2 · 5 · 437
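The tweet is cut off, but the retrieval step it describes is conventionally done by embedding similarity: embed the query, score every chunk against it, and keep the closest ones to inject into the prompt. A minimal sketch of that step (illustrative; `embed` is a hypothetical function returning a vector for a piece of text):

```python
# Sketch of the retrieval step in a RAG pipeline: score each chunk by
# cosine similarity between its embedding and the query embedding, then
# keep the top-k chunks to inject into the prompt. `embed` is a
# hypothetical function returning a vector for a piece of text.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], embed, k: int = 3) -> list[str]:
    q_vec = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    return ranked[:k]
```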
HumanFirst retweeted
@·
How to Mitigate LLM Hallucination and Single LLM Vendor Dependency (Link to the full article in the comments) Four years ago I wrote about the importance of context when developing a chatbot. Context is more relevant now with LLMs than ever before. Injecting the prompt with a
1 · 1 · 2 · 331
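One common way to reduce single-vendor dependency is to put a thin interface in front of the model call so providers can be swapped without touching application code. A small sketch of that pattern (the backend classes here are hypothetical placeholders, not real SDK integrations):

```python
# Sketch of a provider-agnostic LLM interface: application code depends on
# the Protocol, so the underlying vendor can be swapped without touching
# callers. The concrete classes below are hypothetical placeholders, not
# real SDK integrations.
from typing import Protocol

class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the OpenAI SDK here")

class VertexAIBackend:
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call the Vertex AI SDK here")

def answer_with_context(llm: LLM, question: str, context: str) -> str:
    # The grounded prompt keeps hallucination in check; the LLM argument
    # keeps the code vendor-neutral.
    return llm.complete(
        f"Using only this context:\n{context}\n\nAnswer: {question}"
    )
```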
HumanFirst retweeted
@·
The graph below illustrates how accuracy improves when the relevant information appears at the beginning or end of the input, and how performance degrades when the referenced data sits in the middle.
[Graph: answer accuracy versus position of the relevant information in the input context]
0 · 1 · 1 · 69