general abstract nonsense

3.5K posts

@damienstanton

(he/him) Sr. research software engineer @pwc; @cuboulder @gradbuffs student. Focus areas: type theory, programming languages, data science, distributed systems.

UTC -4 | -5 · Joined April 2010
1.3K Following · 436 Followers
Jonathan Blow @Jonathan_Blow
@damienstanton This has always been how I learn best. I think the problem so far is just that when I read explanations of how CT is supposed to help things I am interested in (like programming), the explanations look wrong to me. But maybe I haven't found the right application area.
3 replies · 0 retweets · 4 likes · 2.5K views
Jonathan Blow @Jonathan_Blow
Every time I read something to try and learn more about Category Theory, I come away unimpressed. It just always seems like vapid nonsense. If I were to read one and only one book (or paper/video/etc) to convince myself that category theory is good, what should it be?
105 replies · 9 retweets · 264 likes · 233.8K views
general abstract nonsense @damienstanton
@Jonathan_Blow Or to put it differently, category theory is only really useful at the interfaces between mathematical systems or paradigms, which is why it is so vapid; outside of those interfaces it’s not going to produce much, but at them it’s a powerful scientific tool.
0 replies · 0 retweets · 0 likes · 46 views
general abstract nonsense @damienstanton
@Jonathan_Blow Since CT takes abstraction itself as a foundational principle (which is unusual), the best way to approach it, to me, is to marry it with a concrete math you already care about: algebra, type theory, geometry, etc. I like Bob Harper’s “computational trilogy” approach. ncatlab.org/nlab/show/comp….
2 replies · 1 retweet · 8 likes · 2.6K views
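One way to make the “marry CT with concrete math you already know” advice tangible for a programmer: a list with `map` is a functor, and the functor laws are ordinary, checkable program properties. A minimal Python sketch (all names here are mine, purely illustrative, not from the thread):

```python
# Sketch: the list functor. fmap must preserve identity and composition.

def fmap(f, xs):
    """Apply f to each element: the 'map' part of the list functor."""
    return [f(x) for x in xs]

def compose(g, f):
    """Ordinary function composition: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

xs = [1, 2, 3]
inc = lambda n: n + 1
dbl = lambda n: n * 2

# Functor law 1: mapping the identity changes nothing.
assert fmap(lambda x: x, xs) == xs

# Functor law 2: mapping a composition equals composing the maps.
assert fmap(compose(dbl, inc), xs) == fmap(dbl, fmap(inc, xs))  # [4, 6, 8]
```

The point is the “interface” reading from earlier in the thread: the laws say nothing about what lists contain, only how `map` must interact with composition, and the same two laws apply verbatim to optionals, promises, parsers, etc.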
general abstract nonsense @damienstanton
@charles_irl It isn’t not programming, exactly, but I’ve mostly used GPT to explore ideas in type theory (and sometimes I ask for code listings to illustrate our conversations). The most useful thing I’ve found is synthesizing, comparing, and contrasting the content of papers or groups of papers.
0 replies · 0 retweets · 0 likes · 13 views
Charles 🎉 Frye @charles_irl
Anyone using ChatGPT for work tasks _outside of computer programming_ want to share what's worked/not worked for you? I mostly use it for code, where I've developed a sense for where it helps and how to use it, and I'm curious about other areas: copywriting, editing, ideation.
32 replies · 3 retweets · 43 likes · 12.9K views
general abstract nonsense @damienstanton
@dvassallo I’ve volunteered with the TEALS org for a while, and the standard HS curriculum is MIT Scratch then on to Python or Java. But personally I would start with Racket! It’s one IDE and place to learn, and it follows a well-structured path with teaching languages: docs.racket-lang.org/drracket/htdp-…
0 replies · 0 retweets · 0 likes · 26 views
Daniel Vassallo @dvassallo
My 9yr old wants to start learning how to make computer games. I was making small games in C at his age, but I’d rather spare him the trauma :) What’s the best tech nowadays for teaching kids to code a Tetris-like game while also learning some programming fundamentals?
615 replies · 171 retweets · 2.3K likes · 1.5M views
Rebecca Valentine @defnotbeka
what's the most alien, weird, bizarre music you've ever encountered? Music that sounds like a nonhuman species created it, but which nevertheless still feels like music, in that it has clear structure (i.e., no noise music or similar stuff that feels random).
35 replies · 2 retweets · 43 likes · 8.1K views
general abstract nonsense @damienstanton
@steveklabnik Yes! In particular I love the description of .await as a composition operator. Explained in a clear way like this, I find that thinking of async/await as a certain kind of state monad can make it easier to understand.
0 replies · 0 retweets · 0 likes · 174 views
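A toy model of that “`.await` is composition” reading, sketched in Python with generators (everything here is mine and purely illustrative): each generator is a small state machine, `yield` plays the role of `.await` as a suspension point, and a driver threads results through, sequencing the steps.

```python
# Sketch: async/await as sequenced state machines.
# 'yield' suspends like .await; the driver resumes with a value.

def fetch(n):
    # Pretend this suspends on I/O and later resumes with a result.
    result = yield ("io", n)
    return result

def program():
    a = yield from fetch(1)   # "await" the first step
    b = yield from fetch(2)   # compose it with the second step
    return a + b

def run(gen):
    """A toy event loop: answers each ("io", n) request with n * 10."""
    try:
        request = next(gen)
        while True:
            tag, n = request          # tag identifies the kind of request
            request = gen.send(n * 10)  # resume the state machine
    except StopIteration as done:
        return done.value             # the program's final result

assert run(program()) == 30   # 1*10 + 2*10
```

The state-monad reading is visible in `run`: the driver owns the state (what each suspension resumes with), and `program` is just a pure description of how the steps compose.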
general abstract nonsense @damienstanton
@ChShersh It would be a great thing! The more people learn about it, the better. With 5.0, OCaml is the only ~mainstream language I know of that embraces algebraic effects & handlers as a first-class construct.
0 replies · 0 retweets · 1 like · 53 views
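OCaml 5's `Effect` module aside, the shape of effects-and-handlers can be sketched with Python generators (a rough analogy of mine, not OCaml semantics): performing an effect yields a request out of the computation, and the handler decides what to answer and resumes the suspended continuation.

```python
# Sketch: algebraic effects emulated with generators.
# 'yield Ask()' performs an effect; the handler supplies the answer
# and resumes, loosely like an OCaml 5 deep handler.

class Ask:
    """An effect requesting an int from the environment."""

def computation():
    x = yield Ask()       # perform Ask
    y = yield Ask()       # perform Ask again
    return x + y

def handle(gen, answer):
    """A deep handler: answers every Ask with the same value."""
    try:
        effect = next(gen)
        while True:
            if isinstance(effect, Ask):
                effect = gen.send(answer)   # resume the continuation
            else:
                raise RuntimeError("unhandled effect")
    except StopIteration as done:
        return done.value

assert handle(computation(), 21) == 42
```

The useful property, mirrored from the real construct: `computation` never says where its answers come from, so the same code can be run under a test handler, a stateful handler, or an I/O handler.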
Dmitrii Kovanikov @ChShersh
I wanna make an OCaml course. But now it's not the time. I spent 8 years teaching Haskell, using it in various industries, OSS, blogging, and participating in the community. That's why my Haskell course is THE SHIT. My OCaml course won't be that good. RemindMe! 10 years
8 replies · 0 retweets · 67 likes · 6.7K views
Ben Golus⚠️⭕ @bgolus
So with GPUs now able to read directly from the SSD, and allowing shaders to spawn shaders, what's the ETA on games being 100% run on the GPU and the CPU only existing to ferry user input and networking data around?
57 replies · 46 retweets · 1.3K likes · 184.2K views
general abstract nonsense retweeted
(((ل()(ل() 'yoav))))👾
huh? so when gpt4 was thought to be a really really big gpt3 people were like "WOW AMAZING" and now with the rumor of it being 8*220B mixture of experts with small inference trick they are like "Oh, Mixture of Experts? thats what you do when you are low on ideas"?
14 replies · 15 retweets · 330 likes · 85.4K views
general abstract nonsense retweeted
LLM Security @llm_sec
* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves
vulcan.io/blog/ai-halluc…
77 replies · 2.1K retweets · 7.4K likes · 1.8M views
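One defensive habit against that attack: before pip-installing whatever an LLM's code imports, list which imported names don't already resolve locally, and vet those by hand. A rough Python sketch (a heuristic of mine, not a security tool; `totally_made_up_pkg` is a deliberately fake name assumed not to be installed):

```python
import ast
import importlib.util

def imported_modules(source):
    """Collect top-level module names imported by a piece of code."""
    tree = ast.parse(source)
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

def unresolved(source):
    """Modules the code imports that aren't installed locally --
    exactly the names an attacker could squat on a package index."""
    return {m for m in imported_modules(source)
            if importlib.util.find_spec(m) is None}

llm_code = "import json\nimport totally_made_up_pkg\n"
assert unresolved(llm_code) == {"totally_made_up_pkg"}
```

The check is deliberately conservative: an unresolved name might be a legitimate package you simply haven't installed, so the output is a to-vet list, not a verdict.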
general abstract nonsense retweeted
AK @_akhaliq
Can Large Language Models Infer Causation from Correlation?

paper page: huggingface.co/papers/2306.05…
dataset: huggingface.co/datasets/causa…

Causal inference is one of the hallmarks of human intelligence. While the field of CausalNLP has attracted much interest in recent years, existing causal inference datasets in NLP primarily rely on discovering causality from empirical knowledge (e.g., commonsense knowledge). In this work, we propose the first benchmark dataset to test the pure causal inference skills of large language models (LLMs). Specifically, we formulate a novel task, Corr2Cause, which takes a set of correlational statements and determines the causal relationship between the variables. We curate a large-scale dataset of more than 400K samples, on which we evaluate seventeen existing LLMs.

Through our experiments, we identify a key shortcoming of LLMs in terms of their causal inference skills, and show that these models achieve close to random performance on the task. This shortcoming is somewhat mitigated when we re-purpose LLMs for this skill via finetuning, but we find that these models still fail to generalize: they can only perform causal inference in in-distribution settings, when variable names and textual expressions used in the queries are similar to those in the training set, but fail in out-of-distribution settings generated by perturbing these queries. Corr2Cause is a challenging task for LLMs, and would be helpful in guiding future research on improving LLMs' pure reasoning skills and generalizability.
[image attached]
35 replies · 268 retweets · 1.2K likes · 341.5K views
general abstract nonsense @damienstanton
This is also great advice for any discipline, artistic or otherwise. Real, tangible career growth is as much about connecting with people who help motivate, mentor, and inspire as it is about honing the skills.
shawn kelly @Shawnimator

About 15 years ago, I stopped asking to work on specific films at ILM and instead started asking to work with specific PEOPLE. Artists I knew I could learn from. Nothing has boosted my career growth and job satisfaction more than that single conscious choice.

0 replies · 0 retweets · 1 like · 58 views
Patrick Mineault @patrickmineault
Favorite podcasts on machine learning, neuroscience or general nerdy matters?
48 replies · 13 retweets · 237 likes · 108K views
general abstract nonsense retweeted
Gonçalo Hall @Gonzohall
@paulg Remote work, when well done, is a superior management model. Most of those companies never implemented key aspects of remote work like:
- async-first communication
- documentation-first approach
- proper feedback loops and performance review
4 replies · 8 retweets · 152 likes · 30.6K views
Firas D @firasd
@goodside I'm thinking of a term for this: Oracle Fallacy. People really treat ChatGPT like an Oracle, so they think it can do things it can't (similar to when people ask "why did you say that?", "what were you trained on?", "is that a real link?", etc.)
2 replies · 8 retweets · 51 likes · 60.5K views
Riley Goodside @goodside
Friend: I just learned about temperature. Now I use it all the time in ChatGPT! Me: You can't set temperature in ChatGPT. Friend: What do you mean? You just...
[image attached]
83 replies · 177 retweets · 2.5K likes · 1.1M views
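For context on what "temperature" actually does (this is the standard sampling formula, not anything specific to the screenshot): logits are divided by T before the softmax, so low T sharpens the distribution toward the top token and high T flattens it toward uniform. A minimal sketch with made-up logit values:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Standard temperature-scaled softmax: p_i ∝ exp(logit_i / T)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]                   # illustrative token logits

cold = softmax_with_temperature(logits, 0.5)   # sharper: favors the argmax
hot = softmax_with_temperature(logits, 2.0)    # flatter: nearer uniform

assert cold[0] > hot[0]              # low T concentrates probability mass
assert abs(sum(cold) - 1.0) < 1e-9   # still a probability distribution
```

Which is why the friend's claim is suspicious: temperature is a parameter of the sampling step, exposed by APIs, not something expressible inside a chat message.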
general abstract nonsense @damienstanton
@NickADobos @offtheblok GPT and similar models seem poor at understanding abstraction boundaries and how software systems should be designed at the level of modules or type theory. They’re going to push the human work toward a kind of pair programming using rich specification languages (e.g., Agda).
1 reply · 0 retweets · 0 likes · 18 views
Nick Dobos @NickADobos
Have seen multiple experienced devs arguing "chatGPT won't replace devs, most coding work isn't writing code" You honestly think an hour long meeting w/10 people at $200k+/yr ea will come up with a better product, design & implementation plan, faster & cheaper than Ai? ok LOL
91 replies · 28 retweets · 557 likes · 221K views