

Ajeet
80 posts

@officialajeet
building something interesting, prev @blockchain @Chiliz




In the end, latency matters for everything we build as humans. It is the speed at which things happen. There's real demand for high throughput and minimal latency in AI agent training and response time: the faster an agent trains, learns, and responds, the faster things can happen. Part of this comes down to the software being developed, but it also comes down to hardware and inter-chip communication latency. Photonic chips are being used for inter-chip and GPU-to-GPU communication, which helps. It's about time to ask what comes next, beyond this, to reduce latency further. Quantum computing?

The future of pre-seed will change soon and may end up limited to deep tech. It's likely we'll see multiple companies run by a single individual, making millions and possibly more profitable than large companies, which in the long run is all about sustainability. We might also see early companies getting acquired more often, so that big companies can sustain their edge in the market.



CoRL (Conference on Robot Learning) is still small but it's growing fast! I still remember NeurIPS (NIPS) in 2016: it felt too big at the time, and "overhyped". Who would have known.




If we let some AI agents loose in a simulation, how will they behave, organize, and structure their world?
1. They might structure themselves and behave like humans, because they were trained on our data.
2. They might borrow ideas from nature, e.g. animals, insects, birds, etc.
3. They might merge all known possibilities and simulate to find the best one.
4. They might need more computation to find something genuinely new, because they would have to assess each existing possibility and try to fill the gaps.
My suggestion would be to give each agent different characteristics and experiences, so it brings more possibilities based on what it has lived through, which changes the outcome. The only problem is that simulation agents shouldn't know anything about their pre-trained data, so we can remove bias. Some data can be shared based on what other agents in the simulation have learned, but we should aim for agents that arrive at the truth by realising what's true. With this, maybe we can understand more about ourselves, how we have evolved, and what's best for humans across multiple possibilities. We may well see some world war happen between these agents, and in the end realise it was all a simulation, where each data point gets stored in a large database for humans to use to find what's best for us.
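The setup described above can be sketched in a few lines: agents with distinct traits, no access to any "pre-trained" knowledge, learning only from direct experience and from what other agents choose to share inside the simulation. This is a minimal toy sketch, not a real agent framework; all names (`Agent`, `World`, the `cooperative` trait) are hypothetical illustrations.

```python
import random

class Agent:
    """A simulated agent with its own traits and no prior knowledge."""

    def __init__(self, name, traits):
        self.name = name
        self.traits = traits      # hypothetical trait dict, e.g. {"cooperative": 0.9}
        self.knowledge = set()    # starts empty: no pre-trained data, removing bias

    def explore(self, world, rng):
        # Learn one fact about the world through direct experience.
        fact = rng.choice(world.facts)
        self.knowledge.add(fact)

    def share(self, other):
        # Sharing is the only channel through which information spreads
        # between agents; a hoarding agent keeps its knowledge to itself.
        if self.traits.get("cooperative", 0.0) > 0.5:
            other.knowledge |= self.knowledge

class World:
    """The simulated environment: just a pool of discoverable facts."""

    def __init__(self, facts):
        self.facts = facts

def run_simulation(steps=20, seed=0):
    rng = random.Random(seed)
    world = World(facts=[f"fact-{i}" for i in range(10)])
    agents = [
        Agent("a", {"cooperative": 0.9}),  # shares freely
        Agent("b", {"cooperative": 0.2}),  # hoards what it learns
    ]
    for _ in range(steps):
        for agent in agents:
            agent.explore(world, rng)
        agents[0].share(agents[1])
        agents[1].share(agents[0])
    # The "large database for humans": how much each agent ended up knowing.
    return {a.name: len(a.knowledge) for a in agents}
```

Even in this tiny version, the trait differences change the outcome: the hoarding agent benefits from the cooperative one without reciprocating, which is exactly the kind of emergent asymmetry a larger simulation could surface.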


@anisha21m People follow ideologies to be part of the herd rather than forming their own, because they're afraid of being left alone. People seek comfort in belonging to the herd, and that's often fine, but one should decide whether that ideology is what they truly believe in.