CoreLumen

303 posts


@corelumen

Nicholas Blanchard - Designer, developer https://t.co/zatC4DbNDK

Shawnee, KS · Joined April 2026
69 Following · 23 Followers
Pinned Tweet
CoreLumen @corelumen
Founder of Corelumen: corelumen.io. Check out my projects!
Pathlight: AI agent stack traces. See what your agent is doing, debug, and fix, all in one place. syndicalt.github.io/pathlight
Eventloom: Immutable, traceable agent logging. Integrated with Pathlight for visualization. syndicalt.github.io
ClearDay: A PCOD/PCOS habit tracker and coach. clearday.care
Provara: Adaptive LLM routing gateway. Save money and avoid regressions! provara.xyz
Divita: Blogs to books, magazines, podcasts, and reading circles. Where your words find voice. divita.app
Coming soon:
Specora: AI-native IT service management. If a human touches a ticket, something went wrong.
Ampline: AI-native estimating and small-business management software for the electrical industry.
0 replies · 0 reposts · 0 likes · 70 views
CoreLumen @corelumen
specora-core works, though it has a limited number of generators. Contract-driven development: a few demos show the range of apps that can currently be generated from contracts. specora-core will spin up infra plus a healer, monitor logs, and fix runtime errors. Code becomes an ephemeral artifact; contracts are the source of truth. github.com/syndicalt/spec…
0 replies · 0 reposts · 0 likes · 384 views
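The contract-driven flow described above can be sketched in a few lines. This is a hypothetical illustration only: specora-core's real contract schema, generators, and healer are not shown in the thread, so the `Contract` shape and `generate_app` helper here are invented for demonstration.

```python
# Hypothetical sketch of contract-driven development in the spirit of the
# tweet above: the contract is the source of truth, and the generated app
# is an ephemeral artifact that can be regenerated at any time.

from dataclasses import dataclass, field

@dataclass
class Contract:
    """Declares what the app must do; everything else is derived from it."""
    name: str
    routes: dict = field(default_factory=dict)  # path -> canned response

def generate_app(contract: Contract):
    """Return a minimal 'app': a callable serving the contract's routes."""
    def app(path: str) -> str:
        if path not in contract.routes:
            # In the described system, a healer watching logs would catch
            # this runtime error and regenerate from the contract.
            raise KeyError(f"unhandled route {path}")
        return contract.routes[path]
    return app

contract = Contract(name="hello-service", routes={"/health": "ok", "/": "hello"})
app = generate_app(contract)
print(app("/health"))  # the generated artifact honors the contract
```

Regenerating `app` from an edited `contract` is the whole workflow: the code itself is never hand-patched.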
Gary Bernhardt @garybernhardt
Are there any actual success stories for "software factories" like gastown? Anything I can see running, and preferably see code?
19 replies · 2 reposts · 61 likes · 10.9K views
Aariv Singh @aarivCodes
morning guys! 😊 What are you building today?
[image]
17 replies · 2 reposts · 22 likes · 209 views
Captain Insight @CaptainInsightX
AI Engineer Interview Question: User says “the AI is hallucinating” How do you even prove that?
32 replies · 2 reposts · 38 likes · 868 views
CoreLumen @corelumen
Know when a provider silently ships a regression. Cut model spend at equal quality, automatically. Answer "why did our bill double?" in one screen, not a grep. Built for teams shipping AI-powered products who've outgrown raw API access. The LLM firewall is implemented; context optimization is in progress. provara.xyz
0 replies · 0 reposts · 0 likes · 10 views
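The "cut spend at equal quality" routing above reduces to: try the cheap model first, escalate when a quality probe dips. A minimal sketch, assuming illustrative model names, prices, and a stand-in probe; Provara's actual scoring and gateway logic are not public in this thread.

```python
# Minimal sketch of adaptive LLM routing: cheapest model that clears a
# quality threshold wins. A continuous probe like this is also how a
# provider's silent regression would surface, as a score drop over time.

MODELS = [
    {"name": "small", "cost_per_1k": 0.1},   # tried first: cheapest
    {"name": "large", "cost_per_1k": 1.0},   # fallback when quality dips
]

def quality_score(model: str, prompt: str) -> float:
    # Stand-in for a real eval probe; here, the small model is assumed
    # adequate only for short prompts.
    return 0.9 if model == "large" or len(prompt) < 80 else 0.5

def route(prompt: str, threshold: float = 0.7) -> str:
    """Pick the cheapest model whose probe score clears the threshold."""
    for m in MODELS:
        if quality_score(m["name"], prompt) >= threshold:
            return m["name"]
    return MODELS[-1]["name"]  # never fail closed: use the strongest model

print(route("short prompt"))  # small model suffices
print(route("x" * 200))       # escalates to the large model
```

Logging every routing decision with its cost is what makes "why did our bill double?" answerable in one screen.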
TheTechWorldPodcast @TheTechWorldPod
Drop your product below 👇 Feedback day! I want to see what you're building. This counts as marketing. This brings traffic. This gets you followers. Last time seen by over 7800 people. Let's do it!
18 replies · 0 reposts · 7 likes · 213 views
CoreLumen @corelumen
- context_retrieval_events migration/table
- Retrieval summary and recent-event APIs
- POST /v1/context/optimize now records retrieval analytics
- GUI now shows Retrieval Analytics and Retrieval Events on /dashboard/context
- Demo seed, OpenAPI, docs, roadmap, changelog, and tests updated
1 reply · 0 reposts · 0 likes · 18 views
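The changelog above wires retrieval analytics into context optimization. A minimal sketch of what that pairing could look like, assuming a simple word-overlap scorer; Provara's actual ranking, endpoint payloads, and context_retrieval_events schema are not shown in the tweet.

```python
# Hypothetical context optimizer: rank candidate documents against the
# query, keep the top few, and emit the kind of analytics row a
# context_retrieval_events table could record for the dashboard.

def optimize_context(query: str, docs: list, keep: int = 2):
    """Keep the `keep` docs with the most word overlap with the query."""
    qwords = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(qwords & set(d.lower().split())),
        reverse=True,
    )
    kept = scored[:keep]
    event = {"query": query, "candidates": len(docs), "kept": len(kept)}
    return kept, event

kept, event = optimize_context(
    "refund policy for annual plans",
    ["Refund policy: annual plans are refundable within 30 days.",
     "Office dog policy.",
     "Annual plan pricing and refund terms."],
)
print(event)  # the analytics row; the off-topic doc was dropped
```

Recording `candidates` versus `kept` per request is what lets a Retrieval Analytics view show how much context the optimizer is trimming.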
CoreLumen @corelumen
Just starting to build the context optimizer for provara.xyz! I'll share progress in this thread. Targeting 3x token efficiency and a 20-40% reduction in data size.
1 reply · 0 reposts · 0 likes · 58 views
Khairallah AL-Awady @eng_khairallah1
This Chinese developer launched Llama 70B locally on a MacBook on a plane and, for a full 11 hours without internet, ran client projects. He was sitting by the window on a transatlantic flight with a MacBook Pro M4 with 64 GB of memory. WiFi on board cost $25 for the flight. He declined. No cloud API, no connection to Anthropic or OpenAI servers, no internet at all. Just a local Llama 3.3 70B in bf16 and his own orchestrator script. The model runs through llama.cpp. Generation speed: 71 tokens per second. Context: around 60,000 tokens. Memory usage: 48.6 GiB out of 64. Battery at takeoff: 3 hours 21 minutes.

He gave the orchestrator this system prompt before takeoff: "You are an offline orchestrator running on a single MacBook. There is no network. The only resources you have are local files in /Users/dev/work, the Llama 70B inference server at localhost:8080, and a battery budget of 3 hours 21 minutes. Process the queue at /Users/dev/work/queue.jsonl (one client task per line). For each task: draft → run local evals → save artefact to /Users/dev/work/done/. Save context checkpoints every 12 tasks so you can resume after a battery swap. Stop only on empty queue or when battery drops below 5%."

So the system knows exactly what resources it is running on. It knows it has no connection to the outside world for the next 11 hours. It knows it has finite memory and a finite battery. It knows the human will not intervene until the plane lands.

The system runs in a single loop: take a task from the queue, run it through inference, save the artifact, write a checkpoint. Task after task, just like that. Only when the battery drops below 5% does the orchestrator automatically pause, wait for the laptop to switch to the backup power bank, and continue from the last checkpoint.

Here is what the system actually writes in its log during the flight:
"saved context checkpoint 8 of 12 (pos_min = 488, pos_max = 50118, size = 62.813 MiB)"
"restored context checkpoint (pos_min = 488, pos_max = 50118)"
"prompt processing progress: n_tokens = 50 / 60818"
"task 37016 done | tps = 71 | text → /Users/dev/work/done/proposal_westside.md"

Outside the window: clouds, blue sky, and no WiFi. On the tray: one MacBook, an open terminal on two screens, and an inference server on localhost. From what I have observed, this is the cleanest offline AI workflow I have seen in the past year: 11 hours of flight, $0 for WiFi, and the entire client queue closed before landing.
Khairallah AL-Awady @eng_khairallah1

x.com/i/article/2049…

57 replies · 80 reposts · 547 likes · 103.8K views
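The orchestrator described above is one loop: pop a task, run inference, save the artifact, checkpoint every 12 tasks, pause below 5% battery. A runnable sketch of that loop with a fake inference function and an in-memory queue; the checkpoint cadence and 5% cutoff come from the story, while the battery model and drain rate here are illustrative.

```python
# Sketch of the offline orchestrator loop from the story above.

def run_queue(tasks, infer, battery_pct, drain_per_task=1.0, checkpoint_every=12):
    """Process tasks until the queue is empty or battery drops below 5%."""
    done, checkpoints = [], []
    for i, task in enumerate(tasks, start=1):
        if battery_pct < 5.0:
            break                    # pause; resume from last checkpoint after swap
        done.append(infer(task))     # draft -> (local evals elided) -> artifact
        battery_pct -= drain_per_task
        if i % checkpoint_every == 0:
            checkpoints.append(i)    # persist context so work survives a power swap
    return done, checkpoints, battery_pct

fake_infer = lambda t: f"artifact:{t}"
done, cps, left = run_queue([f"task{i}" for i in range(30)], fake_infer, battery_pct=20.0)
print(len(done), cps, left)  # 16 tasks fit the 20% budget; one checkpoint at task 12
```

The real version would read /Users/dev/work/queue.jsonl, call the llama.cpp server on localhost:8080, and write artifacts to disk, but the control flow is exactly this.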
Rayane @FlippedRay
drop ur startup link
38 replies · 0 reposts · 12 likes · 565 views
CoreLumen @corelumen
@Its_Nova1012 DOS, then OS/2 Warp, then Windows 3.1-Windows 10, now Pop+KDE Plasma
0 replies · 0 reposts · 1 like · 52 views
NOVA @Its_Nova1012
What was the first Operating System you ever used? - Windows - Linux - MacOS And what are you using now?
668 replies · 8 reposts · 225 likes · 35.3K views
🃏 @anupamrjp
Your SaaS has 5 seconds ⏳ No buzzwords. No fluff. What pain does it remove instantly? Drop it 👇
11 replies · 3 reposts · 8 likes · 411 views
CoreLumen @corelumen
Seems pretty easy. Get a computer fan and hook it up to the tube, with a lightweight cover on the end. Wire the fan to a switch driven by an Arduino connected to a CO2 monitoring probe. Simple script: fan on or off based on the CO2 reading. ...or don't worry about indoor CO2.
levelsio @levelsio

I still haven't solved the CO2 bedroom challenge. You open the window and you wake up from a 6am garbage truck, or barking dogs, and sunlight. You close it, you suffocate at 1200 ppm at 5am. I guess you really need some mini tube in your wall with a vent that opens and closes based on internal CO2, but how do I build that?

0 replies · 0 reposts · 0 likes · 19 views
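The fan idea above reduces to a thermostat-style control loop. A pure-Python sketch of the on/off logic with hysteresis, so the fan doesn't chatter around a single threshold; on the Arduino, the same logic would read the CO2 probe and drive the fan pin. The 1000/800 ppm thresholds are illustrative, not prescribed in the tweet.

```python
# Hysteresis control for the CO2 fan: turn on above one threshold, off
# below a lower one, and hold state in between to avoid rapid toggling.

CO2_ON_PPM = 1000   # turn the fan on above this
CO2_OFF_PPM = 800   # keep it running until the room drops below this

def fan_command(co2_ppm: float, fan_on: bool) -> bool:
    """Return the next fan state given the current CO2 reading."""
    if co2_ppm >= CO2_ON_PPM:
        return True
    if co2_ppm <= CO2_OFF_PPM:
        return False
    return fan_on  # inside the band: hold the current state

state = False
for reading in (650, 1250, 900, 750):
    state = fan_command(reading, state)
    print(reading, "->", "ON" if state else "OFF")
```

The 900 ppm reading keeps the fan running because it sits inside the band; that gap between the on and off thresholds is what prevents the relay from clicking on and off all night.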
CoreLumen @corelumen
This is the most stars any of my projects have ever gotten on @github! Thanks to everyone who has found Eventloom and Pathlight useful! github.com/syndicalt
[image]
0 replies · 0 reposts · 0 likes · 10 views
Your MVP Guy @Sherifdeenolat2
Founders, drop your app idea and people will tell you if they'd actually use it. Drop your project (link + 1 sentence) and others reply with:
- I would use
- I wouldn't use, & why
If you post, take some time to review others 👇👇
14 replies · 1 repost · 13 likes · 463 views
Germán Merlo 💻 🇦🇷
Your product does NOT deserve to be unknown! - 5 words tops - URL if launched Let's drive some traffic 👇
11 replies · 0 reposts · 8 likes · 248 views