Joe de Moraes

104 posts

@joedemoraes

Master Your Mindset, Transform Your Software Engineering Career https://t.co/Xk6YtaZGXe

Joined April 2009
44 Following · 130 Followers
Joe de Moraes@joedemoraes·
High Agency in Software Engineering

High agency is essentially the art of overcoming obstacles. High agency is also the common denominator of high-performing software engineers.
Joe de Moraes reposted
George Mack@george__mack·
1/ HIGH AGENCY Once you SEE it - you can never UNSEE it. Arguably the most important personality trait you can foster. I've thought about this concept every week for the last two years since I heard @ericweinstein discuss it on @tferriss' podcast. THREAD...
Joe de Moraes@joedemoraes·
@sebastianvolkis I have been using mostly GPT-4 and it's fine, I'd say. But it's quite pricey. I'm wondering if you are doing some kind of caching for your chatIQ?
Sebastian Volkis@sebastianvolkis·
Anyone else using the Assistants API notice that even on GPT-3.5 Turbo it's significantly slower? I'm noticing better responses, but it's a bit of a painful wait sometimes.
Joe de Moraes@joedemoraes·
@ankurkumarz Are you using GPTCache in production? What is your experience with it? :)
Ankur Kumar@ankurkumarz·
While these have been suggested as Assess, considering the Gen AI trends these should be Adopt 👇 🔹 LangChain 🔹 LlamaIndex 🔹GPTCache What do you think? thoughtworks.com/en-us/radar/la…
Joe de Moraes@joedemoraes·
@zilliz_universe I'm wondering if there are many developers using this in production. The use-case page of GPTCache doesn't have much. Do you think false positives on cache hits are the main blocker for people to adopt this?
Zilliz@zilliz_universe·
Using a semantic cache like GPT Cache will improve LLM performance and reduce costs. Learn more about it 🔗 bit.ly/3RUF4KS #Zilliz #LLM #MachineLearning
Rohan Paul@rohanpaul_ai

Joe de Moraes@joedemoraes·
@rohanpaul_ai Are you using LLMCache in real products? Or do you know about it? I'm trying to collect use cases.
Rohan Paul@rohanpaul_ai·
Massive cost saving (by 50% or more) on your OpenAI API / ChatGPT API calls by using caching with GPTCache 🚀
🟠 Also much faster response times
🟠 Overcome rate-limit restrictions
🟠 Greatly enhance the scalability of your application by reducing the load on the LLM service.

🤔 The problem it solves: using an exact-match approach for LLM caches is less effective due to the complexity and variability of LLM queries, resulting in a low cache hit rate.

🤔 How does it work? To address this issue, GPTCache adopts alternative strategies like semantic caching. Semantic caching identifies and stores similar or related queries, thereby increasing cache hit probability and enhancing overall caching efficiency. GPTCache employs embedding algorithms to convert queries into embeddings and uses a vector store for similarity search on these embeddings. This process allows GPTCache to identify and retrieve similar or related queries from the cache storage. Users can customize their own semantic cache and can even develop their own implementations to suit their specific needs.

GPTCache offers three metrics to gauge its performance, which are helpful for developers optimizing their caching systems:
📌 Hit Ratio: quantifies the cache's ability to fulfill content requests successfully, compared to the total number of requests it receives. A higher hit ratio indicates a more effective cache.
📌 Latency: measures the time it takes for a query to be processed and the corresponding data to be retrieved from the cache. Lower latency signifies a more efficient and responsive caching system.
📌 Recall: the proportion of queries served by the cache out of the total number of queries that should have been served by the cache. Higher recall indicates the cache is effectively serving the appropriate content.
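The semantic-caching idea described in that tweet (embed the query, look up similar past queries, return the cached answer on a close-enough match) can be sketched in a few lines. This is a toy illustration, not GPTCache's actual API: the `SemanticCache` class, its `threshold` parameter, and the bag-of-words "embedding" are all stand-ins I've invented for the sketch; a real setup would use a proper embedding model and a vector store.

```python
# Toy semantic cache: bag-of-words embeddings + cosine similarity.
# Illustrative only — not GPTCache's real implementation or API.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)  # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold   # minimum similarity for a cache hit
        self.entries = []            # list of (embedding, cached answer)
        self.hits = 0
        self.requests = 0

    def get(self, query: str):
        """Return the cached answer for the most similar stored query,
        or None if nothing clears the similarity threshold (cache miss)."""
        self.requests += 1
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            self.hits += 1
            return best[1]
        return None  # caller falls through to the real LLM API here

    def put(self, query: str, answer: str):
        """Store an LLM answer under the query's embedding."""
        self.entries.append((embed(query), answer))

    def hit_ratio(self) -> float:
        """The 'Hit Ratio' metric from the tweet: hits / total requests."""
        return self.hits / self.requests if self.requests else 0.0
```

Note how the threshold is the knob behind the false-positive question raised in the replies above: lowering it raises the hit ratio but increases the chance of serving a cached answer for a query that only looks similar.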
Joe de Moraes@joedemoraes·
@thepatwalls Do you know how common this model is across other fields (not just law)?
Pat Walls@thepatwalls·
Wild $8.4M/year SEO business:
- Problem: Lawyers suck at SEO
- Solution: Rank local personal injury firms #1 on Google
- Massive niche: 400k personal injury cases/year @ ~$31K/case
- Firms get a 37% cut (big $$, high cost per lead)
- Pricing: Base package starts at $10,000/month
- Average package is $14,500/month !!!
- They nearly DOUBLED every year for 8 years (94.77% CAGR)
- 8x higher revenue than avg successful agency (based on SS data)
- Growth: They took the same formula to every city / state in US
rankings.io
Wilmer Terrero@wilterrero·
Everyone says: Forget B2C and go all with B2B 🤑 But I can't stop getting B2C product ideas... Does that happen to you?
Joe de Moraes@joedemoraes·
@levelsio What keeps you from doing it? Honest question. Wondering which part of your business you cannot automate, delegate, or pause.
Joe de Moraes@joedemoraes·
@asmartbear Makes sense! Did you love WordPress? Or maybe the technical scalability challenges you had to solve back then?
Jason Cohen@asmartbear·
As much as I’m a proponent of “doing what you love,” you also can come to love things that you’re good at, and that’s generating results you want (like a successful company). I liked WordPress, and then I LOVED WordPress. Strengths, though, are non-negotiable.
Javi Lopez ⛩️@javilopen·
⚡ Second World War photo restoration with AI? A lot of people have asked me if Magnific can be used for old photo restoration. The truth is that we didn't design it with that use case in mind... But a lot of people are using it anyway! 🤷 The big problem is faces: it tends to change them. However, the amount of detail it captures in other areas is astonishing. We have some ideas to improve this.
Joe de Moraes@joedemoraes·
@MichaelKochDev It's a good starting point. But it doesn't mean you have a business model. How many like you are out there? Are they willing to pay for it?
Michael Koch@MichaelKochDev·
If you're building a product that you would actually use, why should you care what your customers think? You should know what your product needs.