Kameron Decker Harris
@KameronDHarris

6.3K posts
Math, computation, networks, neurosci, biology / snow and other fun things. Here for the strange animals. Assistant Prof @WWU Computer Science

🌋 Joined May 2013
392 Following · 917 Followers
Kameron Decker Harris
Kameron Decker Harris@KameronDHarris·
@MelMitchell1 Bezos has enough money not to care, and will probably profit from a T**** admin enough to make it worth it, I suppose
MountRainierNPS
MountRainierNPS@MountRainierNPS·
The Longmire-Paradise Road may have a delayed opening, close early, or remain closed the entire day due to weather or staffing conditions, as determined by the Winter Road Opening Matrix: go.nps.gov/WinterRoadOpen… 2/2
MountRainierNPS
MountRainierNPS@MountRainierNPS·
Starting Sat 10/26/24 the road to Paradise closes nightly at the gate at Longmire. The operating schedule for the winter season has not yet been announced; check Alerts daily for status of the Paradise Road: go.nps.gov/MountRainierAl… 1/2
Florentin Guth
Florentin Guth@FlorentinGuth·
I'm delighted to announce that the rainbow 🌈 paper has been accepted at JMLR! ➡️ updated paper with brand new intro: arxiv.org/pdf/2305.18512 We released code, along with a self-contained tutorial to reproduce our results in a simple setting: github.com/FlorentinGuth/… More below ⬇️
Florentin Guth@FlorentinGuth

I'm excited to finally share our latest paper! 📰A Rainbow🌈 in Deep Network Black Boxes ⬛️ 🧑‍🔬with Brice Ménard, Gaspar Rochette, Stéphane Mallat Every time we train a network on some dataset, we get a different set of weights yet the performance is the same. What is going on?👇

Stéphane Deny
Stéphane Deny@StphTphsn1·
@CellTypist @crozSciTech @SaraASolla Seems like this is the paper claiming precedence over Hopfield networks: pmc.ncbi.nlm.nih.gov/articles/PMC38… It does seem to be in a similar ballpark but I don't understand it well. The Hopfield paper, on the other hand, is very clear to me. Maybe clarity made a difference?
Kameron Decker Harris retweeted
Jane Manchun Wong
Jane Manchun Wong@wongmjane·
Twitter’s algorithm specifically labels whether the Tweet author is Elon Musk “author_is_elon” besides the Democrat, Republican and “Power User” labels github.com/twitter/the-al…
Kameron Decker Harris
Kameron Decker Harris@KameronDHarris·
@zamakany I remember Steve and Nathan constructing their first board system in an abandoned windowless room in the applied math building. We were always a bit puzzled by why it was happening. Now they're YouTube stars!
Kameron Decker Harris retweeted
UVM Larner Med
UVM Larner Med@UVMLarnerMed·
This month, #UVMLarnerMed Associate Professor of Neurological Sciences Davi Bock, Ph.D., along with colleagues from Princeton, Cambridge, and @NIH's The BRAIN Initiative®, presented their research on mapping the entire fruit fly brain at #sfn2024 @UVMResearch @uvmvermont
Tim Vogels
Tim Vogels@TPVogels·
Help. I am keeping a running list of #neurotheory / #comp_neuro / #neuroAI / #quant_neuro groups that currently has 222 names; def. incomplete, especially re: younger groups. If you started a group since COVID, or if you want to know if you made my list, give me a ping. RT@ H'sD
Dario Ringach
Dario Ringach@DarioRingach·
@TonyZador @ylecun The use of image-based correlation in the optical mouse was inspired by motion detection models in insects. That is well documented in Richard Lyon's interviews and writings. It is impossible to prove it would have been impossible to develop without such input.
Tony Zador
Tony Zador@TonyZador·
Looking for a "fact"--imagining that a single discrete discovery in neuro will have a direct 1-to-1 impact on AI--reflects a narrow and even naive view of how neuro has impacted AI. The impact involves intuition and inspiration. Cf Hopfield, Bengio, @ylecun , Hinton, etc.
Sam Gershman@gershbrain

I'd like to teach a paper which shows how a fact about the brain materially improved an AI system in a way that is unlikely to have been figured out by engineering alone. I haven't been able to find a single example of this. Suggestions welcome.

Kameron Decker Harris
Kameron Decker Harris@KameronDHarris·
@Sauers_ @StphTphsn1 Fourier is showing up because you are looking for an orthogonal basis & your data are ~stationary/smooth. Try a different kind of matrix factorization, e.g. nonnegative or ICA.
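The alternative suggested above (nonnegative matrix factorization instead of an orthogonal Fourier basis) can be sketched in a few lines of NumPy. This is a generic illustration of NMF via Lee-Seung multiplicative updates; the toy data, rank, and iteration count are assumptions for the demo, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonnegative data: mixtures of two nonnegative "parts"
parts = np.array([[1.0, 0.0, 2.0, 0.0],
                  [0.0, 3.0, 0.0, 1.0]])
coeffs = rng.random((50, 2))
X = coeffs @ parts  # 50 samples, 4 features, all nonnegative

def nmf(X, k, iters=2000, eps=1e-9):
    """Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0."""
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        # Updates are elementwise multiplicative, so W and H stay nonnegative
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")
```

Unlike a Fourier basis, the learned factors are not constrained to be orthogonal, only nonnegative, which often yields parts-based rather than wave-like components.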
Sauers
Sauers@Sauers_·
@StphTphsn1 Is there a way to remove the Fourier basis so that we just have the underlying signal? Or is the Fourier basis an inherent part of the signal
Kameron Decker Harris retweeted
Sebastian Seung
Sebastian Seung@SebastianSeung·
🧵on Japan's underrated contributions to neural nets. Shun-ichi Amari @UTokyo_News_en @riken_en is another one of my heroes. His 1972 paper on associative memory models modeled Hebbian plasticity using an outer product weight matrix.
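The outer-product Hebbian rule Seung mentions can be sketched in a few lines of NumPy. This is a generic illustration of an Amari (1972) / Hopfield-style associative memory, not code from either paper; the pattern count, dimension, corruption level, and synchronous update scheme are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Store P random +/-1 patterns of dimension N via the outer-product (Hebbian) rule
N, P = 100, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Outer-product weight matrix; zero the diagonal (no self-connections)
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(x, steps=10):
    """Synchronous sign updates until the state stops changing."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1  # break ties
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Corrupt a stored pattern by flipping 10 bits, then run the dynamics
probe = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
probe[flip] *= -1
recovered = recall(probe)
overlap = (recovered @ patterns[0]) / N
print("overlap with stored pattern:", overlap)
```

With only 5 patterns in 100 units the network is far below its storage capacity, so the corrupted probe falls back into the stored attractor.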
Kameron Decker Harris
Kameron Decker Harris@KameronDHarris·
@beenwrekt It's the modern version of Jevons' paradox that I teach in my algorithms class. "More efficient hardware/algorithms can still end up being a crappier experience" OTOH my 12 year old ThinkPad running debian is still fully useful so long as I don't care about battery life
Kameron Decker Harris
Kameron Decker Harris@KameronDHarris·
I'd add that not everything is fairly represented in this thread
Kameron Decker Harris
Kameron Decker Harris@KameronDHarris·
Read this ⏬ if you care about science
Jürgen Schmidhuber@SchmidhuberAI

The #NobelPrizeinPhysics2024 for Hopfield & Hinton rewards plagiarism and incorrect attribution in computer science. It's mostly about Amari's "Hopfield network" and the "Boltzmann Machine."

1. The Lenz-Ising recurrent architecture with neuron-like elements was published in 1925 [L20][I24][I25]. In 1972, Shun-Ichi Amari made it adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights [AMH1]. However, Amari is only briefly cited in the "Scientific Background to the Nobel Prize in Physics 2024." Unfortunately, Amari's net was later called the "Hopfield network." Hopfield republished it 10 years later [AMH2], without citing Amari, not even in later papers.

2. The related Boltzmann Machine paper by Ackley, Hinton, and Sejnowski (1985) [BM] was about learning internal representations in hidden units of neural networks (NNs) [S20]. It didn't cite the first working algorithm for deep learning of internal representations by Ivakhnenko & Lapa (Ukraine, 1965) [DEEP1-2][HIN]. It didn't cite Amari's separate work (1967-68) [GD1-2] on learning internal representations in deep NNs end-to-end through stochastic gradient descent (SGD). Neither the later surveys by the authors [S20][DL3][DLP] nor the "Scientific Background to the Nobel Prize in Physics 2024" mention these origins of deep learning. ([BM] also did not cite relevant prior work by Sherrington & Kirkpatrick [SK75] & Glauber [G63].)

3. The Nobel Committee also lauds Hinton et al.'s method for layer-wise pretraining of deep NNs (2006) [UN4]. However, this work neither cited the original layer-wise training of deep NNs by Ivakhnenko & Lapa (1965) [DEEP1-2] nor the original work on unsupervised pretraining of deep NNs (1991) [UN0-1][DLP].

4. The "Popular information" says: "At the end of the 1960s, some discouraging theoretical results caused many researchers to suspect that these neural networks would never be of any real use." However, deep learning research was obviously alive and kicking in the 1960s-70s, especially outside of the Anglosphere [DEEP1-2][GD1-3][CNN1][DL1-2][DLP][DLH].

5. Many additional cases of plagiarism and incorrect attribution can be found in the following reference [DLP], which also contains the other references above. One can start with Sec. 3:

[DLP] J. Schmidhuber (2023). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Swiss AI Lab IDSIA, 14 Dec 2023. people.idsia.ch/~juergen/ai-pr…

See also the following reference [DLH] for a history of the field:

[DLH] J. Schmidhuber (2022). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Lugano, Switzerland, 2022. Preprint arXiv:2212.11279. people.idsia.ch/~juergen/deep-… (This extends the 2015 award-winning survey people.idsia.ch/~juergen/deep-…)
