CTO@MN
@CTOMn1
119 posts
CTO @ Meganexus. A small business making a big impact on people's lives. Opinions my own.
Out and about · Joined February 2021
308 Following · 54 Followers
CTO@MN @CTOMn1
@EE I'm not a customer. If I were to become a customer, I would like the 1.6 Gb full fibre; however, as I'm an existing BT customer, this offer doesn't appear to be available, and you want to charge me more money for less speed.
EE @EE
@CTOMn1 Hello. If you would like to discuss upgrading and there are no existing deals via your online account, you can speak with the team on 150. They'll have a look from there.
CTO@MN @CTOMn1
@EE Any reason why, if I weren't a BT customer, I could get 1.6 Gb full fibre for £61, but because I am a BT customer this offer doesn't exist and it's £90 for 900 Mb? Feels slightly unfair.
Yuchen Jin @Yuchenj_UW
We launched Hyperbolic's on-demand GPU cloud last week, and it has now gone from $0 to $1 million ARR in just 7 days! Not much marketing, just 1 tweet. Tell me what you're building, and I'll spot you free credits for an 8xH100 node for at least a few hours to start.
Cameron Stow @camerontstow
Comet is the ultimate personal knowledge layer. It completely eliminates the need to open up PKM tools like Roam or Obsidian. It's the most efficient and frictionless way to find notes, tabs, highlights, saved content, etc. Completely redefines my idea of a "second brain".
fks @FredKSchott
For the past few months, I've been secretly working on a new email client. I think it's finally ready to share with the world...
CTO@MN @CTOMn1
@DanCalle @karpathy Snap. But it would be incredibly useful if the tools did some of the heavy lifting and allowed me to set rules: if I don't respond in x time, move to project x; if I engage y times in a particular period, move to project y; etc. Simplifies the experience for an old man.
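The rules wished for above can be sketched as a tiny triage function. This is a minimal illustration only, assuming hypothetical `Conversation` fields and thresholds; no real tool's API is implied.

```python
# Rule-based filing as described in the tweet: move a conversation to a
# project based on inactivity or engagement. All names and thresholds here
# are hypothetical stand-ins, not any actual product's API.
from dataclasses import dataclass

@dataclass
class Conversation:
    hours_since_reply: float      # time since the user last responded
    engagements_this_week: int    # how often the user engaged recently
    project: str = "inbox"        # default bucket before any rule fires

def file_conversation(convo: Conversation) -> Conversation:
    # "If I don't respond in x time, move to project x."
    if convo.hours_since_reply > 72:
        convo.project = "dormant"
    # "If I engage y times in a particular period, move to project y."
    elif convo.engagements_this_week >= 5:
        convo.project = "active"
    return convo
```

The point of the sketch is that both rules reduce to simple threshold checks over per-conversation metadata, which a client could run automatically.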
Dan Calle @DanCalle
I manage hundreds (thousands?) of conversations that fall into four groups:

1) Long-running, bookmarked - basically my staff. Three examples:
- I have an AI personal trainer/nutritionist I always return to for training/nutrition questions.
- I have one conversation that helped me build my current home Linux box, and I return to it for any HW/OS/SW questions related to it.
- I have several AI professors I use to learn various subjects - one per subject.

2) Useful, may return to, but not necessarily. Examples:
- I saw nice sweet potatoes at the grocery store - asked about sweet potato soup, made soup. A week later, I saw a nice pumpkin and wanted to make a similar soup. I remembered that convo, which already knew my equipment and preferences, and returned to it for a different soup.
- In general, if I think I've asked a question before, and the context from before will save me some time now, I use search to look at previous conversations, and might continue one of them rather than start a new one.

3) One-off questions: I usually ask them in a fresh conversation.

4) Truly throwaway questions. Not only do I start a fresh conversation, but I will usually archive/delete it when I'm done. This is when the subject is pretty trivial, and I view it as clutter.

Special case for some long-running conversations: I have also noticed that sometimes overly long context can start to produce weird effects. The LLM starts to hallucinate more, is less reliable about remembering details, and so on. In situations like this I sometimes ask it to generate a detailed summary of everything we have been working on, and I may ask follow-up questions, and then I paste the results into a new conversation and continue from there.
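The "summarize and restart" special case in the tweet above is mechanical enough to sketch: when a thread's estimated token count crosses a budget, ask the model for a recap and seed a fresh thread with it. This is a hedged illustration only; `call_model` is a hypothetical stand-in for whatever chat-completion API is in use, and the 4-chars-per-token estimate is a rough heuristic.

```python
# Sketch of the summarize-and-restart pattern: roll an overlong thread
# into a fresh one seeded with a model-written summary.

def estimate_tokens(messages):
    # Rough heuristic: ~4 characters per token for English text.
    return sum(len(m["content"]) for m in messages) // 4

def maybe_rollover(messages, call_model, budget=100_000):
    """If the thread exceeds `budget` estimated tokens, summarize it and
    return a fresh message list seeded with the summary; otherwise return
    the thread unchanged. `call_model` takes a message list, returns text."""
    if estimate_tokens(messages) <= budget:
        return messages
    messages = messages + [{
        "role": "user",
        "content": "Generate a detailed summary of everything we have "
                   "been working on, so I can continue in a new thread.",
    }]
    summary = call_model(messages)
    # The new conversation starts from the distilled context only.
    return [{"role": "user", "content": "Context from a previous thread:\n" + summary}]
```

The design choice is that the rollover is user-visible and explicit, matching the manual workflow described, rather than a hidden memory layer.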
Andrej Karpathy @karpathy
When working with LLMs I am used to starting a "New Conversation" for each request. But there is also the polar opposite approach of keeping one giant conversation going forever. The standard approach can still choose to use a Memory tool to write things down in between conversations (e.g. ChatGPT does so), so the "One Thread" approach can be seen as the extreme special case of using memory always and for everything.

The other day I came across someone saying that their conversation with Grok (which was free to them at the time) has now grown way too long for them to switch to ChatGPT, i.e. it functions like a moat, hah.

LLMs are rapidly growing in the allowed maximum context length *in principle*, and it's clear that this might allow the LLM to have a lot more context and knowledge of you, but there are some caveats. A few of the major ones as an example:

- Speed. A giant context window will cost more compute and will be slower.
- Ability. Just because you can feed in all those tokens doesn't mean that they can also be manipulated effectively by the LLM's attention and its in-context-learning mechanism for problem solving (the simplest demonstration is the "needle in the haystack" eval).
- Signal to noise. Too many tokens fighting for attention may *decrease* performance due to being too "distracting", diffusing attention too broadly and decreasing the signal-to-noise ratio in the features.
- Data, i.e. train-test data mismatch. Most of the training data in the finetuning conversations is likely ~short. Indeed, a large fraction of it in academic datasets is often single-turn (one single question -> answer). One giant conversation forces the LLM into a new data distribution it hasn't seen that much of during training. This is in large part because...
- Data labeling. Keep in mind that LLMs still primarily and quite fundamentally rely on human supervision. A human labeler (or an engineer) can understand a short conversation and write optimal responses or rank them, or inspect whether an LLM judge is getting things right. But things grind to a halt with giant conversations. Who is supposed to write or inspect an alleged "optimal response" for a conversation of a few hundred thousand tokens?

Certainly, it's not clear if an LLM should have a "New Conversation" button at all in the long run. It feels a bit like an internal implementation detail that is surfaced to the user for developer convenience and for the time being. And the right solution is a very well-implemented memory feature, along the lines of active, agentic context management. Something I haven't really seen at all so far.

Anyway, curious to poll if people have tried One Thread and what the word is.
CTO@MN @CTOMn1
youtu.be/PDIeZ9eouSc?fe… The band and the show. Given my current location, Eton Rifles would be more appropriate, but I just love this song. RIP Rick & thanks for all the memories. #asaturdaykid
CTO@MN @CTOMn1
@timothy_barnes @Apple Only that a large majority on this platform will have no idea what the Apple 1984 ad is :-)
Adam.GPT @TheRealAdamG
openai.com/index/introduc… "We're announcing data residency in Europe for ChatGPT Enterprise, ChatGPT Edu, and the API Platform. This helps organizations operating in Europe meet local data sovereignty requirements when using OpenAI products in their businesses and building new solutions with AI."
CTO@MN @CTOMn1
@bevel_health No payment option on the iPhone app - am I missing something? Only got the one-week free trial.
Bevel @bevel_health
Start 2025 strong with 2 months free of Bevel Pro 💪 Here's how to claim your offer:
1️⃣ Scroll down on the payment screen
2️⃣ Tap "Redeem code"
3️⃣ Enter "2025"
This offer is available to new and returning subscribers for a limited time while supplies last! Don't miss out.
CTO@MN @CTOMn1
@mblukac Same from me. I have an interesting justice use case, if you would be so kind.
Martin B. Lukac @mblukac
Your voice reveals more about you than you might think. AI can now decode your personality from a simple conversation. 🌟 Thrilled to share my latest research in Scientific Reports (Nature).
Stavros Kassinos @KassinosS
Wrote a script to tokenize & segment long text into chunks, generate chunk audio using f5_tts_mlx by @lllucas, and stitch it together. The voice was cloned privately on an M2 Ultra but altered for privacy. The 2:40-long clip generated in 6 minutes using MLX. So fitting that 🚀 MLX by @awnihannun was used to generate the audio for the paper. Happy to share the script. This allows generating audio for complete documents.
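The segmentation step of a pipeline like the one described above can be sketched with the standard library alone. This is an assumption-laden illustration, not the author's script: sentence boundaries are approximated with a regex, the chunk budget is a made-up character limit, and the actual f5_tts_mlx synthesis and audio stitching are deliberately left out.

```python
# Sentence-aligned chunking for long text, as a front end to a TTS call.
# The TTS step itself (f5_tts_mlx) is elided; only segmentation is shown.
import re

def segment_text(text, max_chars=500):
    """Split text into sentence-aligned chunks of at most max_chars
    (a chunk may exceed the limit only if a single sentence does)."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        candidate = (current + " " + sentence).strip()
        if current and len(candidate) > max_chars:
            # Budget exceeded: close the current chunk, start a new one.
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be passed to the TTS model independently, and the resulting audio files concatenated in order; chunking on sentence boundaries avoids cutting words or clauses mid-utterance.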
Tech In Schools Initiative
Comment below for a chance to win free credits on @pixio_ai! 🎨✨ Join our giveaway and unleash your creativity with AI-powered video generation. Don't miss out!
Anil Chandra Naidu Matcha @matchaman11
Notebook is crazy 🔥🔥🔥 But but but..... no multi-lingual support 👎 No custom voices 👎 Introducing Vadoo AI 🤯🤯 Now turn your PDF into a podcast in 32+ languages, with tons of voice customisation including voice cloning. Interested in getting access? Comment "Interested" below.
CTO@MN @CTOMn1
@BenjaminDEKR @Digen_AI
In a world where AI can sway,
For social good, it paves the way.
With avatars so keen,
We'll make a difference seen,
Let's use these invites today!
Benjamin De Kraker @BenjaminDEKR
I have 50 invitation links for @Digen_AI, an AI video avatar / lipsync platform, with 1,500 free credits each. If you want an invite, comment here with one sentence on why you need 'em. Bonus points if it's a limerick.
CTO@MN retweeted
Desmond @desmondhth
I created a beast 🤯 Your product URL → an AI-generated 15-page marketing plan PDF with:
- User persona analysis
- Funnel analysis
- Best-fit channel & step-by-step execution guide (×5)
- ROI projection
Who wants to beta test it? 👀 Drop your URL!