Yuka Ikarashi

2K posts

@c20

PhD candidate @MIT CSAIL / Trying to make things work / 🇯🇵→@00_

Cambridge, MA · Joined February 2013
652 Following · 3.2K Followers
Yuka Ikarashi@c20·
I will be defending on April 24th (co-located with Amanda!). Please lmk if you're interested in joining, and we hope to see you there!
Yuka Ikarashi@c20·
mass migration of proofs to appendix
Yuka Ikarashi@c20·
@cHHillee Oh no I learned about the movie after finishing the book. I bought it a few years ago but it was rotting in my shelf
Horace He@cHHillee·
@c20 In preparation for the movie?
Yuka Ikarashi@c20·
Just finished reading Project Hail Mary. Good good good.
Yuka Ikarashi reposted
Zachary Tatlock@ztatlock·
Dave Patterson’s op-ed is a great overview on how taxpayer-funded research built the backbone of modern tech: RISC, RAID, cloud, ... the list goes on. Encourage your representatives to fully fund the NSF and CISE! thehill.com/opinion/techno…
Yuka Ikarashi reposted
Tomas Pueyo@tomaspueyo·
Can there be an invasion of Iran? Hardly. Two maps explain why, and also why Iran is the way it is today, whether its regime will fall, what other superpowers will do, and in general why Iran is the way it is today 1. Iran is a mountain & desert fortress
andrew blinn@disconcision·
@c20 untrue, some of us arent graduating😭
Yuka Ikarashi@c20·
Please retweet the post and help announce the workshop!
Yuka Ikarashi@c20·
Here is the workshop schedule. We have exciting speakers presenting their papers published at ASPLOS, OOPSLA, ICFP, PLDI, and so on. plr.csail.mit.edu
Yuka Ikarashi@c20·
We are hosting the MIT Programming Languages Review on April 25th in person here at MIT! The PLR is a student-run workshop that aims to highlight the best papers from the past year that we believe will have a significant impact on shaping the future direction of PL research.
Yuka Ikarashi@c20·
@GPU_MODE I cannot DM you for some reason (maybe we have to mutually follow?) I followed you so can you try DMing me? If that doesn't work, my email is "yuka at csail.mit.edu"
GPU MODE@GPU_MODE·
@c20 Please DM me so we can coordinate a date!
Yuka Ikarashi@c20·
@GPU_MODE We're actively working on the Exo GPU backend and I won't be able to talk about GPUs, so the talk will be around vector machines and (non-gpu) tensor accelerators. I'll be interested in giving a talk if that's okay with you.
GPU MODE@GPU_MODE·
@c20 Would you be interested in giving a talk on GPU MODE? youtube.com/@GPUMODE/videos
Tikhon Jelvis@tikhonjelvis·
@c20 sweet, looking forward to seeing that in the future :)
Tikhon Jelvis@tikhonjelvis·
@c20 up on HN again: news.ycombinator.com/item?id=433626… it's catching people's imaginations :) the question in the thread about why normal compilers can't automatically do the optimizations somebody would express in Exo seems pretty interesting
Yuka Ikarashi@c20·
@tikhonjelvis abstraction in the scheduling library. And yes I think this approach is applicable beyond dense linear algebra, and we're working on it!
Yuka Ikarashi@c20·
@tikhonjelvis Exo is "low-level" compared to other user-schedulable languages like Halide, and that's its core strength! Exo offers fine-grained control for peak performance, and provides automation via the implementation of libraries. We showed in the paper that Exo could recover Halide-level
Yuka Ikarashi@c20·
@tikhonjelvis For dense linear algebra-like kernels, optimizations are usually structural and are agnostic to input data and runtime information. Knowing parameters like tensor shapes certainly makes the optimization easier and more efficient, but specialization increases the binary size and
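The thread above describes "structural" optimizations that are agnostic to input data and runtime information, the kind a user of a user-schedulable language applies by hand. As a rough illustration of what such a rewrite looks like (plain Python for clarity; this is not Exo's actual API), here is the classic loop-tiling transformation of a matrix multiply: the arithmetic is unchanged, only the loop structure is reorganized for cache locality, and the rewrite is valid regardless of the matrix contents.

```python
def matmul_naive(A, B, n):
    # Reference triple loop: C = A @ B for n x n lists of lists.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, tile=2):
    # Same arithmetic, restructured into tile x tile blocks so each
    # block's working set stays cache-resident. Assumes tile divides n.
    # The transformation depends only on loop structure, not on the data.
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, ii + tile):
                    for j in range(jj, jj + tile):
                        for k in range(kk, kk + tile):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

Because the rewrite is purely structural, an equivalence check between the two versions never needs to inspect real inputs, which is what makes this style of optimization mechanically verifiable.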