Pinned Tweet

this will upset a lot of AI researchers:
Every neural network since 1986 follows the EXACT same paradigm:
Human designs it → Train → Deploy → Done.
The architecture NEVER changes during training.
The nodes are all identical.
The training process is fixed.
GPT-4? Same paradigm.
Gemini 2.0? Same paradigm.
LLaMA-3? Same paradigm.
We've been stuck in a box for 40 years.
The box just got more expensive.
#AI #DeepLearning #MachineLearning #NeuralNetworks #Backprop #Architecture #Paradigm #Research #GPT4 #Gemini #LLaMA
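The "design → train → deploy" loop described above can be sketched in a few lines. This is a hedged toy illustration, not anyone's actual system: a hypothetical 2-4-1 MLP trained on XOR with the plain backprop recipe popularized in 1986; every name and hyperparameter here is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: a human designs the architecture up front (a toy 2-4-1 MLP).
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

def forward(X):
    h = np.tanh(X @ W1)  # every hidden node is the same kind of unit
    return h @ W2

# Step 2: a fixed training procedure -- backprop + gradient descent on MSE
# (the recipe from Rumelhart, Hinton & Williams, 1986).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1)
    err = h @ W2 - y                                   # output error
    gW2 = h.T @ err / len(X)
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    W1 -= lr * gW1  # the weights change...
    W2 -= lr * gW2  # ...but the shapes (the architecture) never do

# Step 3: deploy the frozen network.
print(np.round(forward(X), 2))
```

The point of the sketch: at no step does the training loop touch `W1.shape` or `W2.shape`, add a node, or alter the update rule. Whatever topology the human chose before training is the topology that ships.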