Conscious Engines • Studio

54 posts

Conscious Engines • Studio
@cengines_studio

AI products for the rest of us — prodigy of @c_engines.

Bangalore, India · Joined July 2025
2 Following · 722 Followers
Conscious Engines • Studio @cengines_studio
@Rocky_T07 @Kautukkundan we're kicking things off with an in-house rig for researchers and all the engineers. we'll bring in more firepower once we saturate this one. and yes, we are actively hiring: ML systems intern and more. we have an exploration/co-work Friday coming up, sign up.
1 reply · 0 reposts · 1 like · 41 views
Rocky @Rocky_T07
@Kautukkundan @cengines_studio Is the GPU centrally available to all engineers to run and train models, or is it just one setup that everyone has to share? Also, how are the data storage and RAM systems built to reduce data movement costs? And on a side note, are you looking for a cool ML systems intern?
1 reply · 0 reposts · 0 likes · 181 views
Kautuk | Conscious Engines @Kautukkundan
We used an RTX 6000 Pro to run a state-of-the-art immersive world model.
10 replies · 5 reposts · 117 likes · 7.2K views
Kautuk | Conscious Engines @Kautukkundan
2x RTX 6000 Pro Blackwell. 192GB VRAM. $20,000. Building [redacted] in Indiranagar, Bangalore.
[3 media attachments]
59 replies · 21 reposts · 842 likes · 61.5K views
Conscious Engines • Studio retweeted
sabesh 📟 @sabeshbharathi
i built a cute kitten for macOS 🐈‍⬛ that moves around your screen and follows your cursor! powered by MLX and Apple Silicon, so you can talk to it locally, completely offline!
11 replies · 6 reposts · 51 likes · 3.2K views
Conscious Engines • Studio retweeted
sabesh 📟 @sabeshbharathi
The first edition of Benne Builds went insanely well! It all started with me and @rajhraval talking about our shared love for dosas, filter coffee and building in AI. So we thought we'd handpick an exclusive list of people from our circles to join us for breakfast + building in AI + Apple platforms.

The result was insane: 40% of the folks installed Xcode for the first time on their Macs. Everyone demoed something that they built. All after demolishing scrumptious ghee pudi doses 😋

First of many. Going forward we will announce the event beforehand and pick out new dosa spots to try across Bengaluru, and the guest list will always be exclusive and curated, with a mixture of designers and developers and at least 40% women representation.

Remember, the first rule of Benne Builds is you don't talk about Benne Builds (until it gets over xD)
[4 media attachments]
5 replies · 5 reposts · 61 likes · 3.3K views
Conscious Engines • Studio retweeted
Kautuk | Conscious Engines @Kautukkundan
This Saturday we brought 80+ AI filmmakers together at Morphic and Cinic's ad-making competition. Always a pleasure when the @c_engines HQ fills up like this; this was also the biggest gathering yet. I love being around smart people pushing AI into new territory, and building this space is our small contribution to making sure that keeps happening in Bangalore! More to come.
[4 media attachments]
Cinic @cinic_ai

Something viral is loading… AI Ad Making Hackathon is live @morphic @houseof_rare @PocketFM_App @REVELSTUDIOSINC @Remit2Any

2 replies · 4 reposts · 44 likes · 3.8K views
Conscious Engines • Studio retweeted
Shubham Tuteja • Jade @ShubhamTotu
these research and hardware and ai and product and design labs i tell you, conscious chaos.
[3 media attachments]
3 replies · 4 reposts · 42 likes · 2.2K views
Conscious Engines • Studio @cengines_studio
The Agentic Summer Build-a-thon, hosted by our friends at @innercircle_so x @aiweekendsxyz, lands at CE HQ. Come by, pick a problem, and ship 🚣‍♂️ See y'all on the 18th.
Inner Circle @innercircle_so

Agentic Summer • Build-a-thon: Audio Track x @ElevenLabs
~ 100K+ credits to every single participant
~ Winners: 1.5M credits
~ Top team: 12M credits
Come build in Bangalore with @aiweekendsxyz at @c_engines. Registrations open now ↓

3 replies · 3 reposts · 21 likes · 3.5K views
Conscious Engines • Studio retweeted
Inner Circle @innercircle_so
Agentic Summer • Build-a-thon: Audio Track x @ElevenLabs
~ 100K+ credits to every single participant
~ Winners: 1.5M credits
~ Top team: 12M credits
Come build in Bangalore with @aiweekendsxyz at @c_engines. Registrations open now ↓
[1 media attachment]
15 replies · 18 reposts · 112 likes · 14.8K views
Conscious Engines • Studio @cengines_studio
We published a benchmark report on LLM inference at the edge this week. The OS is the real bottleneck - not the chip.

Most edge benchmarks measure peak throughput. Why sustained load matters: a model that hits 30 tok/s for 10 seconds and then throttles to 8 tok/s is not a 30 tok/s model. For always-on AI - think persistent agents, continuous screen observation, background processing - the steady-state number is the only number that matters.

Peak benchmarks are essentially marketing. Ours measures sustained load. That's where thermal throttling, OS safety margins, and memory pressure actually show up.

A few sharp questions from the thread:
> "Did you try going bare metal / stripped OS?" - Yes, this is exactly what we're exploring in the follow-up.
> "Is tokens/sec even the right metric?" - Depends on the use case. For mobile UX, fast-enough beats precise-but-slow. We track prefill latency + memory too.
> "Why not just offload to cloud?" - Privacy. An always-on screen observer shouldn't touch a third-party server.

Follow-up report coming soon: device-specific tuning, quantization experiments, and trying to push past what consumer hardware currently allows.

Paper → arxiv.org/abs/2603.23640
[1 media attachment]
3 replies · 3 reposts · 10 likes · 1.6K views
Conscious Engines • Studio retweeted
Kautuk | Conscious Engines @Kautukkundan
We just published a benchmark report we've been working on; would love to get some eyes on it from folks here.

📄 LLM Inference at the Edge: Mobile, NPU, and GPU Performance Efficiency Trade-offs Under Sustained Load
🔗 arxiv.org/abs/2603.23640

We benchmarked LLM inference across mobile, NPU, and GPU setups under sustained load, covering performance, efficiency, and where things break down in practice. This is the first in a series; next up is device-specific tuning experiments.

Would genuinely appreciate any feedback, pushback, or thoughts, especially from people running similar workloads. Drop a reply here or reach out directly.
[1 media attachment]
Kautuk | Conscious Engines @Kautukkundan

New hardware just arrived at Conscious Engines. So what's up with the Jetsons?

We just finished our first benchmark paper on LLM inference performance across edge hardware: what runs, what doesn't, what throughput looks like when the cloud isn't in the loop.

Why? Proactive AI only works if it's always on. Always on only works if it's fast. And fast only works if the model is small enough, optimised enough, and close enough to where you actually are.

We ran a suite of tests on multiple consumer edge devices - Raspberry Pis, Arduinos, mobile phones, MacBooks and gaming laptops with beefier GPUs. Publishing this week.

Next up: optimisation. Not just measuring how models perform on edge devices - making them perform better. Quantization, pruning, architecture decisions that recover speed without losing capability. We want to find the floor: how lean can a model get before it stops being useful, and what lives on the other side of that line.

If you're a lab working on edge hardware and want to optimise your inference workflows, we've done the work so you don't have to. We'd love to talk. DMs open.

9 replies · 6 reposts · 38 likes · 5.3K views
1LittleCoder💻 @1littlecoder
I did the thing 👀👀👀 Hosted a workshop on gen media powered by @fal. It was a great experience to see so many innovative builders, and an evening well spent!

I was specifically very deliberate about not wasting participants' time, and I hope I did some justice to it, even though some discussions ran longer than I expected and delayed the start 😭

We discussed the current SOTA in gen media (image and video) and covered a lot of use cases. The biggest standout for me was that the participants were genuinely interested in learning something new.

Glad to meet some Twitter friends @marketcallsHQ and @magnumben, and old friends! Lots of curious minds, high-signal conversations ❤️ Also very thankful to @c_engines for the venue.
[1 media attachment]
1LittleCoder💻 @1littlecoder

Attending in-person tech events in BLR has become a complete waste of time. Organizers are constantly seeking attention. Attendees lack civic sense. The whole event is to show something cool while lacking substance. Humans don't have interesting conversations. I hope I don't echo the same sentiment in the event I'm hosting tomorrow 😭

7 replies · 2 reposts · 38 likes · 3.1K views
Conscious Engines • Studio retweeted
ai & weekends @aiweekendsxyz
we’re doing ai & weekends ep7 tomorrow in bangalore. bring your laptop. bring an idea. or just show up and steal one there. we’ll spend 4 hours building something you think should exist in the world. only rule: don’t leave without shipping. see you there 🚀
[1 media attachment]
5 replies · 6 reposts · 41 likes · 14.8K views
Conscious Engines • Studio retweeted
Kautuk | Conscious Engines @Kautukkundan
New hardware just arrived at Conscious Engines. So what's up with the Jetsons?

We just finished our first benchmark paper on LLM inference performance across edge hardware: what runs, what doesn't, what throughput looks like when the cloud isn't in the loop.

Why? Proactive AI only works if it's always on. Always on only works if it's fast. And fast only works if the model is small enough, optimised enough, and close enough to where you actually are.

We ran a suite of tests on multiple consumer edge devices - Raspberry Pis, Arduinos, mobile phones, MacBooks and gaming laptops with beefier GPUs. Publishing this week.

Next up: optimisation. Not just measuring how models perform on edge devices - making them perform better. Quantization, pruning, architecture decisions that recover speed without losing capability. We want to find the floor: how lean can a model get before it stops being useful, and what lives on the other side of that line.

If you're a lab working on edge hardware and want to optimise your inference workflows, we've done the work so you don't have to. We'd love to talk. DMs open.
[1 media attachment]
2 replies · 5 reposts · 32 likes · 5.1K views