spoilt child (@abbahehe)
503 posts
Econ/CS. Quant Trader @ OMM
California, USA · Joined November 2023
99 Following · 42 Followers

spoilt child (@abbahehe):
@om_patel5 you're using Opus 4.6 extended bro. Please switch to Sonnet so you at least have a reasonable understanding
1 reply · 0 reposts · 0 likes · 384 views

Om Patel (@om_patel5):
saying "hello" to Claude on the Pro plan now costs 2% of your entire session usage. one message, "hello, how are you?", that's it. this is why people are mass migrating to Codex right now: it's literally impossible to reach the limits there. anthropic needs to fix this before they lose the crazy amount of developers they just gained
457 replies · 380 reposts · 6.7K likes · 1M views

Lewis 🇺🇸 (@ctjlewis):
Is Pangram useful? Sort of, but not for the customers it's selling to. It should be selling to Palantir and the government, not to schools. The false positive rate is also much higher than "1 in 10,000." pangram-report.vercel.app
21 replies · 2 reposts · 100 likes · 15.3K views

felpix (@felpix_):
@gonebymorn i passed a few technical interviews my freshman year before i opted to become a j*bless sage, unbothered by worldly tethers such as "total compensation". drop a Codeforces elo or shut up please
2 replies · 0 reposts · 35 likes · 962 views

felpix (@felpix_):
they’re literally just freshman CS concepts? any idiot off the street can study for a month and do leetcodes
sunset vibes (@gonebymorn):
@dumbassngl @felpix_ @rahulsarora see thats the issue, the retard has never made it into technical interviews so he has no clue how important leetcode practice is😂😂😂😂😂

7 replies · 0 reposts · 126 likes · 14.9K views

spoilt child (@abbahehe):
@rahulsarora @felpix_ this guy had his pick of the corporate stuff tbh. he said no to his bank gig after working there freshman year
1 reply · 0 reposts · 1 like · 99 views

arsenii (@iatskar):
hiring quants to model loans against polymarket positions at @gondorfi
- 3 to 5 yoe, tier-1 hft shops only (citadel, js, optiver, etc)
- irl in nyc or can move
- $180-220k + 1-4% equity
$15k for referral. apply below
[image attached]
78 replies · 8 reposts · 191 likes · 381K views

forward deployed ccp gf (@FangYi11101):
@hypersoren Bridgewater is hiring AI PhDs for a million a pop but it doesn’t seem to know what to do with them
2 replies · 0 reposts · 21 likes · 3.3K views

Soren Larson (@hypersoren):
tier 2 hedge funds loading up on "ai people" as the third stage of grief. HFs used to have alt-data and quant advantages, but both are commodified or soon will be. the hiring spree won't last / smart ones jumping to private markets where unfair context advantages can still be found
9 replies · 0 reposts · 73 likes · 57.9K views

spoilt child (@abbahehe):
@rucibear i think your firm has the nicest bathrooms i’ve ever used in my life. random little memory from when i visited
0 replies · 0 reposts · 1 like · 18 views

luciela (@rucibear):
in the bathrooms at work, there's an inner room with a toilet behind a door with a handle, complete with sink and towels. the question is: do i wash my hands in there, and/or do i wash my hands in the outer room?
2 replies · 0 reposts · 2 likes · 115 views

forward deployed ccp gf (@FangYi11101):
Interviewing a bunch of new grads this week and every resume has a 4.0 GPA. Complete noise at this point.
211 replies · 263 reposts · 48.6K likes · 5.1M views

am (@orthogonalizing):
@FangYi11101 i did this then got really sloppy and now i have an abysmal gpa
3 replies · 0 reposts · 214 likes · 13.4K views

𝔐𝔽𝓩 (@mean_field_zane):
Met a really beautiful math PhD student at a house party last night. My friend and I tried talking to her, she immediately said “I don’t want to talk about AI or math” and fled the party. Why were you there if you didn’t want to talk about anything real, it was a party of econ graduate students lol.
92 replies · 11 reposts · 888 likes · 927.2K views

spoilt child (@abbahehe):
@aliuahma how did you know husbandt! i be home by 11 don’t worry
0 replies · 0 reposts · 2 likes · 205 views

ali (@aliuahma):
my future wife is at gtc solving physical agi by selling egocentric data
5 replies · 4 reposts · 141 likes · 7.7K views

Priyanshu Priyank (@PriyanshuP1405):
AI won't take our jobs but he will
[image attached]
3 replies · 0 reposts · 78 likes · 4.9K views

Ciaran Marshall (@microfounded):
The best place to get an undergraduate economics degree in the world (although I'm biased here!) is at the frontier once again!
Antonio Mele (@antoniomele101):
This is a great point, @arindube, and I am really guilty of not sharing more about what I have been up to on the teaching side of things, given that this is what they pay me for! Let's get back on track with a mega post. At @LSEEcon, we've been running a series of structured experiments to figure out exactly how GenAI changes the production function of economics education. We moved past the "cheating" panic early (although we didn't really have one in our programmes) and started actively rebuilding our pedagogy around these tools. Here's what we're doing and what we're learning. And btw, we will be presenting our work at CTREE 2026 in Las Vegas in late May if you are in town.

The AI Economics Professor

With Ronny Razin, we built a specialised, course-aligned AI tutor. Ronny's key idea: the best way to verify whether a student actually understands a concept is to ask them to explain it interactively. Clearly this does not scale to the class sizes we have at LSE (Ronny teaches his course to 850 first-year students). But we can scale with AI!

The key pedagogical principle is that the chatbot uses a Socratic framework. It refuses to hand out final answers. Instead, students are prompted with an exercise, and the chatbot asks them to identify the next step in a mathematical or logical derivation themselves, guiding them through the reasoning rather than short-circuiting it. It adapts to the student's level, for example by clarifying concepts or notation where needed. This gives students access to 24/7 personalised tutoring, levelling the playing field for those who might hesitate to speak up in small classes or office hours, and addressing Bloom's "2-sigma problem" in economics education.

Notice that we didn't train or fine-tune the bot on our course material. We just provided a system prompt embracing the Socratic approach, plus the solutions to the exercises students had to solve. That's it. An off-the-shelf LLM (it was Gemini 2.5 Flash).
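The setup described above (no fine-tuning, just a system prompt plus the exercise solutions handed to an off-the-shelf chat model) can be sketched roughly as follows. This is a hypothetical reconstruction, not LSE's actual prompt: `build_socratic_prompt` and its wording are illustrative, and the model call itself is left out since any chat-style API would do.

```python
def build_socratic_prompt(exercise: str, solution: str) -> list[dict]:
    """Assemble chat messages for a Socratic, no-final-answers tutor.

    The reference solution is given to the model as private context so it
    can check the student's proposed steps, but the instructions forbid
    revealing it directly.
    """
    system = (
        "You are a Socratic tutor for a first-year economics course. "
        "Never state the final answer. Ask the student to propose the next "
        "step of the derivation, confirm or gently correct it, and clarify "
        "notation or concepts when asked. Adapt to the student's level.\n\n"
        f"EXERCISE:\n{exercise}\n\n"
        f"REFERENCE SOLUTION (never reveal directly):\n{solution}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": "I'm ready. What should I try first?"},
    ]

# Example: the game theory exercise mentioned below.
messages = build_socratic_prompt(
    exercise="Find all pure-strategy Nash equilibria of the given 2x2 game.",
    solution="Eliminate the strictly dominated strategy, then best-respond.",
)
```

The point of the sketch is how little scaffolding is involved: the "tutor" is entirely a prompting convention around a stock model, which is why it scales to 850 students.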
We did run a small experiment with a game theory exercise, where students had to work out strictly dominated strategies and pure- and mixed-strategy Nash equilibria. The feedback we received was overwhelmingly positive: students found it useful to work through the reasoning with the chatbot, and it helped them understand the material better. We are also in the process of establishing whether use of the chatbot improves marks in the final exam, although we don't have a full analysis yet. But I can say that this was a very good year for the distribution of marks in this course, way above the average of previous years. If this proves as good as it looks, the next step is to scale it to more courses, potentially expand to similar disciplines at LSE, and potentially to other universities. Stay tuned.

AI Feedback Experiment

Providing high-quality, scalable formative feedback is one of the hardest problems in our job. It's incredibly labour-intensive, and the result is that students often get too little feedback, too late to act on it. The main problem, again, is scale. Can we use AI to enhance our feedback process?

We ran an experiment with @MichaelGmeiner2 in one of our MSc courses. Michael is a great teacher. In his Econometrics course, he teaches students how to write referee reports, and provides feedback to each of them on 5 submitted referee reports. We thought: why don't we provide two feedback reports for each submission, one AI-generated and one human-generated? This would allow us to evaluate how good the AI feedback is relative to human feedback (well, Michael's feedback, which is superhuman in my view, but ok). And so we did. We didn't tell students which was which, to avoid any kind of bias. And again, we just cooked up a prompt for the LLM to generate feedback on the referee report; we provided the AI with the paper to referee, the referee report submitted by the student, and nothing more.
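The blinding step in an experiment like this is simple to mechanise. A minimal sketch, assuming each student receives both reports under neutral labels in a randomized order, with a private key recording which is which; `blind_feedback_pair` is an illustrative name, not code from the actual study:

```python
import random

def blind_feedback_pair(ai_report: str, human_report: str, seed: int) -> dict:
    """Present two feedback reports under neutral labels in random order.

    Returns what the student sees ("shown") and a private mapping ("key")
    from label to source, kept by the experimenters for later analysis.
    """
    rng = random.Random(seed)  # per-student seed keeps assignment reproducible
    pair = [("AI", ai_report), ("human", human_report)]
    rng.shuffle(pair)
    return {
        "shown": {"Feedback A": pair[0][1], "Feedback B": pair[1][1]},
        "key": {"Feedback A": pair[0][0], "Feedback B": pair[1][0]},
    }

blinded = blind_feedback_pair("ai-generated text", "human-written text", seed=42)
```

Seeding per student (rather than using global randomness) makes the label assignment auditable after the fact, which matters when the outcome of interest is the students' usefulness ratings per source.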
We found that students rated the AI-generated feedback as less useful than the human-generated feedback, although not by a lot. The main problem with the AI-generated feedback is that it is too generic, and does not address the specific TECHNICAL issues that the student may have missed in their report. It is also too positive, and does not give the student the critical feedback they need to improve. In particular, students highlighted that the AI feedback did not enhance their critical thinking and did not address methodological problems in the research article they were refereeing.

Some of these aspects can be addressed with a better prompt, and we are working on it. The technical and methodological issues could also be addressed by providing a summary of what the teacher expects students to criticise in the paper, although there may be additional challenges with this approach (what if the student finds something else to criticise that the teacher did not think of? it happens all the time). Students also mentioned that they find the two pieces of feedback complementary, and that they would be happier getting both than just one of them. This points in the direction of a hybrid approach, where AI is used to enhance the human feedback process rather than substitute for it. The caveat, of course, is that we haven't used the most recent models; we didn't try mixture-of-experts and all the tricks in the book.

Teaching Python & RELAI Principles

Perhaps our biggest curriculum shift: with @JADHazell we pioneered teaching AI coding tools to first-year students. In the first-year macro course that Joe teaches, we introduce students to Python coding for economic analysis. This year, we decided to move in a different direction: since the advent of AI coding agents, we believe it is more important to be able to READ and ORGANISE code than to write it.
It is more important to be able to explain your intent to the AI coding agent, and verify that the intent has been reflected in the code, than to be able to write the code yourself and test it. But how can you teach students who have never seen a line of code to do that?

Introducing Reverse Engineering Learning with AI (RELAI). Start with a full snippet of Python code. The student is told to prompt the AI to explain what the code does. Once the student understands what the code does, they can ask about the syntax and the programming concepts behind the snippet. Then they can ask for a study plan for those concepts, if needed. Then they can ask the AI what would happen if they changed a given line or parameter. Then they can experiment themselves by changing the code, and debug with the help of the AI. Finally, the student can ask the AI to produce new code, based on what was learned and the new intent.

I call this the EXPLORE approach: Examine the code, eXplain what it does, Probe deeper, Link to economics, Output prediction, Recreate understanding, Extend with modification.

Once students are familiar with AI coding agents, they are assessed with a challenging coursework that Joe created. The assignment has a part that is difficult to do without AI but should be feasible with AI. There are open-ended questions where students have to go beyond simple repetition of what was learned in the course, possibly exploring new datasets and questions, etc. We think this approach can help integrate AI coding agents into the curriculum in a meaningful way, and help students develop a deeper understanding of coding tools faster and more efficiently. Coursework is on the way, so we will be able to evaluate the impact of this approach in the next few months. I personally believe RELAI can be adapted to other topics and subjects, and can become one of the ways we interact with AI when learning something new.
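The EXPLORE loop above is essentially an ordered sequence of prompts to the assistant, anchored on one code snippet. A hypothetical sketch of that sequence (the prompt wording is mine, not the course's):

```python
# The seven EXPLORE stages, each paired with an illustrative student prompt.
EXPLORE_STAGES = [
    ("Examine", "Read this code carefully: what are its inputs and outputs?"),
    ("eXplain", "Explain line by line what this code does."),
    ("Probe", "Which syntax or programming concepts here are new to me?"),
    ("Link", "How does this computation map to the economic model?"),
    ("Output", "Predict the output if I change this line or parameter."),
    ("Recreate", "Quiz me until I can restate the logic unaided."),
    ("Extend", "Help me modify the code to answer a new question."),
]

def explore_prompts(snippet: str) -> list[str]:
    """Expand one code snippet into the ordered EXPLORE prompt sequence."""
    return [
        f"[{name}] {question}\n--- code ---\n{snippet}"
        for name, question in EXPLORE_STAGES
    ]

prompts = explore_prompts("y = c + i + g  # national income identity")
```

Encoding the stages explicitly like this is one way to hand a novice a fixed on-ramp: the student never has to invent a prompt from scratch, only work through the sequence.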
Read more about our approach here: python-ec1b1.vercel.app

AI as a productivity tool

This is where you can really go nuts. I have used AI to produce new teaching material for several workshops and courses: slides, assignments, exercises, etc. The last few exams were written with AI tools, first creating a series of questions with suggested solutions, and then choosing the most appropriate ones. I use a coding agent (@cursor_ai) with access to my teaching materials and past exams, so that it is aware of the content and style. You get a very good exam draft in minutes, and can edit, change questions, generate new ones, etc. It used to take me days to write a good exam; now it takes me a few hours in the afternoon.

I used Cursor to do deep research for a new course I wanted to design. I asked for topics, examples, current research in the field that I may not have been aware of, similar courses' syllabi, and in general the state of the art in the field. I got a very long list of topics that I could choose from to design my own course, based on my taste, interests, and what I think my students should know. I could generate different versions of the same course for different levels (UG, MSc and PhD).

Conclusion

We are still at the early stages of this journey. We are learning a lot, and we are still figuring out how best to use AI to enhance our teaching. One important thing you may have noticed: we first define our pedagogical approach, and then we integrate AI tools to support it. The other principle should be: design not for the tools you have now, but for the ones you will have in a few months or years. If you have comments, or have been running similar experiments, I will be happy to hear from you.

1 reply · 0 reposts · 0 likes · 1K views

𝔐𝔽𝓩 (@mean_field_zane):
@edmund_vilchez @microfounded LSE actually arguably has the second-best undergraduate economics program in the world. They’re the only school other than Chicago that teaches macro right.
3 replies · 0 reposts · 1 like · 236 views