Ian Daniel
1.5K posts

Ian Daniel
@death667b
Part owner, Junction3 IT Services. Studying Computer Science at QUT. Volunteer at @magikcraftio
Brisbane, Queensland · Joined June 2011
292 Following · 172 Followers

@WisdomRCN @MattWallace888 @grok I’m not a genius or anything.. but usually a RED arrow and a RED circle tell you what you need to look at.

@realjonnysamp @Ames2420 I still drink hose water when I am mowing without any kind of PPE, including no shoes. The odd stone or stick hits the leg and I think that is going to strengthen my immune system.

@Ames2420 Kids today would hire a lawyer and file a lawsuit all with a few clicks on their iPad.
This generation in the photo was tough. They had metal slides, ate worms and drank hose water.
Ian Daniel retweeted

8 minutes to liftoff of Starship!
x.com/i/broadcasts/1…

@shinjix2 @ClownPeasant @elonmusk @SpaceX That 1 engine is 3% of thrust. Validates overpowering the engines.
Ian Daniel retweeted

Despite loss of many tiles and a damaged flap, Starship made it all the way to a soft landing in the ocean!
Congratulations @SpaceX team on an epic achievement!!
SpaceX @SpaceX
Splashdown confirmed! Congratulations to the entire SpaceX team on an exciting fourth flight test of Starship!

@The_FJC @GuntherEagleman I actually thought it did pretty good considering he is on top of a car.

@GuntherEagleman The guy can dance.. The car makes for a good dance floor? Maybe they are complaining that they don't have trampolines? The car is not a good trampoline.

@elonmuskADO Twitter, then correct myself and say X with frustration.
And a message is still a tweet, don’t know how else to say it.
Ian Daniel retweeted

@MrBeast If I win this giveaway, I’ll give someone who likes and reshares this post: $6,900
Ian Daniel retweeted

@Algomancer I, as a human being, am informed by the past. When I learn new things, those past experiences “colour” what I learn.
I have pondered what this would look like for a machine.

Something I have been thinking about recently is the notion of plasticity/retrainability in neural networks, in particular when the information being modelled changes over time. As models get more and more expensive to train, the ability for a model to be continually trained on new tasks and new domain data becomes increasingly important. This is obviously related to phenomena such as catastrophic forgetting, but there are actually a bunch of phenomena that can lead to plasticity/expressivity loss over the course of training, such as primacy bias, implicit under-parameterisation, rank collapse and capacity loss. A lot of the impact of these decreases as models scale, but I think it might be a mistake to assume we don’t need to think about it, and it might be something we want to benchmark new models on. I don’t have many prescriptions for these yet, or a clear sense of where they do and don’t apply to training generative models, but I figured I'd share some work I think is interesting.
I think this is increasingly important as we begin to move towards non-stationary domains and tune models with reinforcement learning and objectives other than direct cross-entropy. I figured I'd dump some reading and insights that I have picked up over a bit of a rabbit-hole dive.
This was my entry point to reading.
arxiv.org/pdf/2303.01486…
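To make the rank-collapse/capacity-loss idea above concrete, here is a minimal sketch that is not from the thread or the linked paper: it trains a small MLP sequentially on two synthetic tasks and tracks the entropy-based effective rank of its hidden features, one common proxy for plasticity loss. It assumes PyTorch is available, and names such as effective_rank, TwoLayerMLP and make_task are illustrative only.

# Hedged sketch: probe effective rank of hidden features while training
# a small MLP sequentially on two synthetic regression "tasks".
import torch
import torch.nn as nn

def effective_rank(features: torch.Tensor, eps: float = 1e-8) -> float:
    """Entropy-based effective rank of a (batch, dim) feature matrix."""
    s = torch.linalg.svdvals(features)       # singular values
    p = s / (s.sum() + eps)                  # normalise to a distribution
    entropy = -(p * torch.log(p + eps)).sum()
    return float(torch.exp(entropy))         # exp(H), between 1 and full rank

class TwoLayerMLP(nn.Module):
    def __init__(self, d_in: int = 32, d_hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, 1)
    def forward(self, x):
        h = self.body(x)                     # hidden features we monitor
        return self.head(h), h

def make_task(seed: int, n: int = 512, d_in: int = 32):
    """A fresh random linear regression task; a new seed means a new target."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, d_in, generator=g)
    w = torch.randn(d_in, 1, generator=g)
    return x, x @ w

model = TwoLayerMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for task_id in range(2):                     # train on task 0, then task 1
    x, y = make_task(seed=task_id)
    for step in range(500):
        pred, h = model(x)
        loss = loss_fn(pred, y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        _, h = model(x)
        print(f"task {task_id}: loss={loss.item():.4f}, "
              f"effective rank of features={effective_rank(h):.1f}")

In this toy setting any drop in effective rank between task 0 and task 1 may be small, but the same probe applied during large-scale continual training is the kind of measurement the thread suggests benchmarking new models on.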
