Kanishk Patel

61 posts


@above_almighty

Building Memory Layer for Physical Life

Calgary, Alberta · Joined March 2020
297 Following · 79 Followers
Kanishk Patel @above_almighty
@samzliu Nice! Do share if possible. Adding ASR would be great, but achieving low latency would be tough in my opinion. Definitely a good addition, though. I did add the ability to perform web research to substantiate claims and make them stronger.
Replies 1 · Reposts 0 · Likes 0 · Views 16
Sam Z Liu @samzliu
Love this! Two other things I've been playing with on my custom set-up: 1) A voice agent attached to the editor. This way I can riff off the writing aloud, which activates a different part of my brain. 2) The ability to regenerate text in a specific direction (e.g. make it more professional, make the claim less strong, etc.). In an ideal world, I'm imagining a spider diagram of different attributes I can drag around, and the agent gives me 3-4 different versions based on that.
Replies 1 · Reposts 0 · Likes 1 · Views 48
Kanishk Patel @above_almighty
I often write articles to learn new things and share what I have learnt. With AI it has become easier, but at the same time my learning from those articles has also reduced, so I came up with Mati, a custom WYSIWYG editor with an AI twist. It pushes back: it asks me questions and tells me when my claims are unsubstantiated. It helps me avoid AI sycophancy and makes sure I learn and share the right things. Do you think this is something people will use? Attaching a demo below. Hoping for some feedback!
Replies 3 · Reposts 0 · Likes 2 · Views 94
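The "pushes back on unsubstantiated claims" behaviour described above can be illustrated with a toy heuristic. This is not Mati's actual implementation (which presumably calls an LLM); the regexes, thresholds, and function name below are my own assumptions, sketched only to show the shape of a claim-flagging pass over a draft:

```python
import re

# Words that signal a strong, sweeping claim (illustrative list, not Mati's).
STRONG = re.compile(r"\b(always|never|best|worst|proves?|guarantees?|everyone)\b", re.I)
# Markers that suggest the sentence cites a source.
SOURCE = re.compile(r"(https?://|\[\d+\]|according to)", re.I)

def flag_unsubstantiated(draft: str) -> list[str]:
    """Return push-back questions for sentences that make strong
    claims without any visible source or citation."""
    questions = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft.strip()):
        if STRONG.search(sentence) and not SOURCE.search(sentence):
            questions.append(f"What supports this claim: {sentence!r}?")
    return questions
```

A real editor would feed each flagged sentence back to the writer (or to a web-research step) instead of just collecting questions, but the flag-then-ask loop is the core idea.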
Kanishk Patel @above_almighty
@henrytdowling Yes, but it's not just comments: it analyses, understands, and pushes back on you.
Replies 0 · Reposts 0 · Likes 1 · Views 10
Henry Dowling @henrytdowling
@above_almighty oh this is cool, so it's like google docs comments but the ai comments autonomously?
Replies 1 · Reposts 0 · Likes 1 · Views 17
Kanishk Patel @above_almighty
I write articles to learn and also to help people learn agentic AI. With AI it has become easier, but I follow a strict process to write my articles, and I have been able to replicate it using Claude Code and some creativity. The code is open source at -> github.com/kanishkpatel19… Wanted feedback on how I can make it better and empower more people to write and share their knowledge. Also feel free to provide suggestions on UI, code quality, and LLM usage, or anything you would want to see in this product.
Replies 1 · Reposts 0 · Likes 0 · Views 41
Kanishk Patel @above_almighty
AI development has so many parallels with the industrial revolution. Today we have thousands of catalogued parts for cars. I would not be surprised if future LLMs end up the same: specialised parts put together to create an engineering marvel that acts human. I write about these things here -> learnagentic.substack.com/p/every-transf…
Replies 0 · Reposts 0 · Likes 0 · Views 8
Kritika @kritikakodes
I need a startup name that sounds like it just raised $50M
Replies 261 · Reposts 12 · Likes 369 · Views 48.9K
Kanishk Patel @above_almighty
I am building Founsi AI ... a memory layer for your physical life. Would love to connect with people in this space....
Replies 0 · Reposts 0 · Likes 0 · Views 24
jenna ☁️ @jnananaa
i've been building in public for a few months now and i finally hit 1k followers 🥳 if you're also a builder, designer, or a founder, i would love to connect! feel free to say hi :)
Replies 224 · Reposts 1 · Likes 269 · Views 9K
Suni @suni_code
Drop your project URL 👇🏻 Let's drive some traffic...
Replies 884 · Reposts 10 · Likes 320 · Views 54K
Alex Lieberman @businessbarista
I want to start an AI community for executives. This will be a space for people to share killer use cases, agentic workflows/agents, post-AI org structure, AI governance, AI training/enablement, change management, and more. Comment "AI-native" if you want to join.
Replies 1.8K · Reposts 33 · Likes 1.1K · Views 182.3K
Kanishk Patel @above_almighty
@jihanyang13 @amilabs Congrats on the move! The persistent memory piece is what got me excited. We're building the first open-source eval dataset for visual/spatial memory systems at Founsi AI - would love to chat about how AMI Labs thinks about evaluating memory.
Replies 0 · Reposts 0 · Likes 0 · Views 23
Kanishk Patel @above_almighty
@adxtyahq Lol, buy a Mac or a Linux machine with a minimum of 32 GB in case you wanna run LLMs
Replies 0 · Reposts 0 · Likes 0 · Views 9
aditya @adxtyahq
never buy a 16GB RAM laptop in 2026. you'll regret it within a week
Replies 915 · Reposts 346 · Likes 14.1K · Views 3.9M
Kanishk Patel @above_almighty
@qatarairways These guys booked me a flight from Delhi to Calgary when my original itinerary was Ahmedabad to Calgary. How am I supposed to magically appear in Delhi? Please tell me. And now customer support says it is my fault. @qatarairways @qrsupport I have no idea, but this is a nightmare.
Replies 5 · Reposts 0 · Likes 3 · Views 72
Kanishk Patel @above_almighty
@qatarairways Improve your support, please. It is a nightmare. You need to offer more options. I can clearly see flight availability for my routes, yet you don't offer them. Worst airline ever. Never booking via Qatar again.
Replies 5 · Reposts 0 · Likes 1 · Views 26
Kanishk Patel @above_almighty
@ihtesham2005 Very interesting! How does it handle reasoning over temporal space? Also, I believe we are missing a key piece, i.e. psychometrics, or human behaviour per se: what is important to remember or save is different for each person. Hence self-evolution might not help!
Replies 0 · Reposts 0 · Likes 0 · Views 6
Ihtesham Ali @ihtesham2005
RIP flat RAG ☠️ ByteDance just open-sourced OpenViking, and it exposes everything wrong with how we've been building AI agent memory.

Here's what every agent framework gets wrong: memories live in one place, resources in another, skills scattered everywhere. And when you need context, you're doing flat vector search and hoping for the best.

That's the problem. OpenViking fixes all of it with one idea: treat agent context like a file system. Everything lives under a unified viking:// protocol. Memories, resources, and skills are all organized in directories with unique URIs. Agents can ls, find, and navigate context like a developer working in a terminal.

But the real breakthrough is tiered loading:
→ L0: one-sentence abstract for quick lookup
→ L1: ~2k-token overview for planning decisions
→ L2: full details loaded only when actually needed

Most agents dump everything into context and pray. OpenViking loads only what's needed, when it's needed. Token costs drop. Accuracy goes up. And retrieval actually makes sense now: instead of one flat semantic search, it does directory-level positioning first, then recursive refinement inside high-scoring directories. You can literally watch the retrieval trajectory, no more black box.

The self-evolution piece is wild too. At the end of every session, it automatically extracts learnings and updates agent and user memory. The agent just gets smarter the more you use it.

9K stars. 13 contributors. Built by the ByteDance Viking team that's been running vector infrastructure since 2019. 100% open source, Apache 2.0. Link in comments.
Replies 78 · Reposts 177 · Likes 1.2K · Views 111.8K
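The tiered-loading and coarse-to-fine retrieval ideas in the thread above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not OpenViking's real API: `ContextNode`, plain-string `viking://` URIs, and keyword-overlap scoring stand in for the actual file-system protocol and semantic search.

```python
from dataclasses import dataclass

@dataclass
class ContextNode:
    """One entry in a hierarchical agent-context store, with three tiers
    of detail (L0 abstract, L1 overview, L2 full content)."""
    uri: str        # e.g. "viking://memories/user/prefs" (illustrative)
    abstract: str   # L0: one-sentence summary for quick lookup
    overview: str   # L1: longer summary for planning decisions
    details: str    # L2: full content, loaded only when needed

    def load(self, tier: int) -> str:
        # Return only as much text as the requested tier allows.
        return [self.abstract, self.overview, self.details][tier]

def retrieve(nodes, query_terms, budget_chars=300):
    """Coarse-to-fine retrieval under a context budget: rank all nodes
    by keyword overlap with their cheap L0 abstracts, then give the best
    matches the deepest tier that still fits the remaining budget."""
    scored = sorted(
        nodes,
        key=lambda n: -sum(t in n.abstract.lower() for t in query_terms),
    )
    out, used = [], 0
    for node in scored:
        for tier in (2, 1, 0):  # prefer full detail, degrade gracefully
            text = node.load(tier)
            if used + len(text) <= budget_chars:
                out.append((node.uri, tier, text))
                used += len(text)
                break
    return out
```

The point of the sketch is the shape of the trade-off: ranking happens against tiny L0 abstracts, while expensive L2 detail is spent only on the top hits, which is why token costs drop without giving up accuracy on the items that matter.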