Jana Doppa

1.1K posts

@janadoppa

Huie-Rogers Endowed Chair Professor of CS @WSUPullman; alumni @OregonState @IITKanpur; AI, ML, Computing Systems, AI for Science and Engineering

Pullman, WA · Joined June 2021
248 Following · 460 Followers
Jana Doppa retweeted
Stat.ML Papers @StatMLPapers
Conformal Margin Risk Minimization: An Envelope Framework for Robust Learning under Label Noise ift.tt/4cxoMsI
Replies: 0 · Reposts: 3 · Likes: 15 · Views: 852
Jana Doppa @janadoppa
The workshop will highlight both foundational advances and real-world applications in domains where online experimentation is costly, unsafe, or infeasible, including scientific discovery, engineering design, healthcare, education, recommender systems, and beyond.
Replies: 1 · Reposts: 0 · Likes: 3 · Views: 160
Jana Doppa retweeted
Alan Fern @AlanPaulFern1
We’re hiring! Oregon State University is recruiting a Tenure-Track Faculty Position in Robotics. jobs.oregonstate.edu/postings/177017 Hard to beat the Pacific Northwest + excellent robotics graduate program combo.
Replies: 1 · Reposts: 3 · Likes: 19 · Views: 1.8K
Jana Doppa retweeted
Cory Simon @CoryMSimon
Check out our new paper on adaptively allocating Monte Carlo samples of MOF-adsorbate configurations for efficient, multi-fidelity computational screening of MOFs for an adsorption property using molecular simulations. We view each MOF as a slot machine, then apply top-K arm identification algorithms, developed for the multi-armed bandit problem in reinforcement learning, to sequentially and adaptively allocate the Monte Carlo samples among the MOFs, in a data-driven manner, to obtain the most accurate top-K subset under a fixed sample budget. We also propose our own heuristic, narrowing exploration. 🙏 Led by my PhD student Qia Ke, co-advised by @janadoppa, @scobo06, @huazheng_wang. pubs.acs.org/doi/full/10.10…
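For intuition, here is a minimal sketch of fixed-budget top-K arm identification in the spirit of the tweet. This is an illustrative toy, not the paper's narrowing-exploration heuristic: each "pull" stands in for one Monte Carlo sample of a hypothetical per-MOF property, and the allocation rule (sample the arm whose empirical mean sits closest to the empirical top-K boundary) is an assumption chosen for simplicity.

```python
import random

def pull(mu, sigma=1.0):
    """One noisy Monte Carlo sample from an arm (a stand-in for a MOF property estimate)."""
    return random.gauss(mu, sigma)

def topk_fixed_budget(true_means, k, budget, init=5):
    """Fixed-budget top-K identification: after a uniform warm-up, spend each
    remaining sample on the arm whose empirical mean is closest to the
    empirical top-K boundary (i.e., the most ambiguous arm)."""
    n = len(true_means)
    counts = [0] * n
    sums = [0.0] * n
    for i in range(n):                      # warm-up: init pulls per arm
        for _ in range(init):
            sums[i] += pull(true_means[i])
            counts[i] += 1
    for _ in range(budget - n * init):      # adaptive phase
        means = [sums[i] / counts[i] for i in range(n)]
        order = sorted(range(n), key=lambda i: -means[i])
        # boundary between the empirical k-th and (k+1)-th best arms
        boundary = (means[order[k - 1]] + means[order[k]]) / 2
        j = min(range(n), key=lambda i: abs(means[i] - boundary))
        sums[j] += pull(true_means[j])
        counts[j] += 1
    means = [sums[i] / counts[i] for i in range(n)]
    return sorted(range(n), key=lambda i: -means[i])[:k]
```

Uniform allocation would waste samples on arms that are clearly in or clearly out of the top K; the adaptive rule concentrates the budget on the contested boundary, which is the core idea behind top-K identification under a fixed budget.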
Replies: 1 · Reposts: 2 · Likes: 9 · Views: 283
Jana Doppa retweeted
Arvind Narayanan @random_walker
Today's conversations about AI-assisted programming are strikingly similar to those from decades ago about the choice between low-level languages like C versus high-level languages like Python. I was in college back then and some of our professors reassured us that the same issues had come up in the assembly-vs-compiled-languages debate from their own student days! (If I were to guess, the switch from machine code to assembly even earlier must have led to similar discussions as well.)

The trade-off is always the same: productivity versus control. And the challenge is how to switch to the new paradigm in a way that enhances your skills (at least the ones you care about) instead of offloading too much and letting your skills atrophy.

Some approaches prove too hasty. Vibe coding is turning out to be a dead end because it offloads too much, just as WYSIWYG editors were a dead end for building web apps. But that doesn't mean we were forced to stick to raw HTML/JS: frameworks turned out to be the way forward.

When a new paradigm comes along, it takes months if not years of practice to figure out how to make it work for you. There are always many people dismissing the new thing too quickly. I was one! There are some embarrassing mailing list posts from the early 2000s in which I complained about Python and kids who can't code like real programmers do 🤦 While it's good to be open-minded, I'm not saying everyone needs to jump on the bandwagon. After all, low-level programming languages haven't gone away.

Of course, some people claim that AI is unlike previous waves of automation and can replace programmers. Maybe. The reason I disagree — and see AI as parallel to previous waves of productivity improvements in software engineering — is fourfold. (1) It's a matter of accountability, not just capability. (2) Writing the code was never the bottleneck. (3) I think we're underestimating the ability of experts to stay on top of even rapid AI capability increases by using these tools to dramatically expand what they can build, how well, and how quickly. (4) As these productivity improvements take shape, the potential growth in _demand_ for software is practically infinite, unlike trades where there is a fixed amount of work that needs to get done. For example, the idea that a car would contain ~100 million lines of code would have seemed head-explodingly implausible in the early days of programming.

Many people have observed that software seems to be one of the only fields that is undergoing a rapid transformation due to AI. The usual reason they give is that capability improvements in AI coding have been particularly rapid. I think this is only part of the story. The bigger factor is structural. Software has a history of repeatedly undergoing seismic shifts in the technologies of production, so it has never had time or the cultural inclination to ossify institutional processes around particular ways of doing things.
[image attached]
Replies: 67 · Reposts: 100 · Likes: 568 · Views: 95.6K
Judea Pearl @yudapearl
On behalf of this educational channel, I am happy to congratulate our reader/follower @eliasbareinboim on his election to Fellow of the Association for the Advancement of Artificial Intelligence: "For significant contributions to the theory of causality in AI and its applications." AAAI will celebrate the newly elected Fellows at the AAAI-26 awards ceremony, Thursday January 22nd at 8:30am. Congratulations on this well-deserved honor!
Replies: 5 · Reposts: 5 · Likes: 64 · Views: 5.4K
Jana Doppa retweeted
Alan Fern @AlanPaulFern1
Imagine moving a heavy object with a joystick—through a swarm of quadruped-arm robots. 🕹️

decPLM: decentralized RL for multi-robot pinch-lift-move.
• No comms or rigid links
• Hierarchical RL + constellation reward
• 2 → N robots, sim → real

🔗 decplm.github.io
Replies: 15 · Reposts: 112 · Likes: 627 · Views: 59.5K
Jana Doppa retweeted
Thomas G. Dietterich @tdietterich
Join our NeurIPS 2025 social event: The Role of AI in Scientific Peer Review. Help build community and explore solutions for a fair, efficient, and transparent peer review system. Wed. Dec. 3rd, 7:00 PM – 9:00 PM #neurips2025
Replies: 5 · Reposts: 17 · Likes: 90 · Views: 15.3K
Jana Doppa retweeted
IIT Kanpur @IITKanpur
A visionary educator, researcher, and pioneer, Prof. Vaidyeswaran Rajaraman will always be remembered as one of the foremost architects of computer science education in India.

Joining IIT Kanpur in 1963 as a faculty member in the Department of Electrical Engineering, he played a pivotal role in shaping the institute's early computing curriculum. In 1965, with the encouragement of Prof. H. K. Kesavan and in collaboration with his colleagues, he launched an M.Tech. programme with Computer Science as an option — the first time the subject was introduced as an academic discipline in India. His foresight and leadership were instrumental in laying the foundation for computer science education in the country.

A recipient of the Shanti Swarup Bhatnagar Prize, Om Prakash Bhasin Award, Homi Bhabha Prize, and the Padma Bhushan, Prof. Rajaraman was a mentor who inspired countless students and colleagues through his humility, clarity of thought, and passion for teaching.

As the institute mourns the loss of Prof. Rajaraman, we also celebrate a life devoted to knowledge, innovation, and the pursuit of excellence — a legacy that continues to illuminate India's scientific and technological journey.
[image attached]
Replies: 20 · Reposts: 51 · Likes: 238 · Views: 38.4K
Jana Doppa @janadoppa
Key Ideas:
- Frame the problem as a mini-max objective
- Solve it iteratively by combining offline RL (to update the distribution over policies) with no-regret online optimization (to select the Lagrange variable)
- Prove approximate optimality via the no-regret guarantee of the online optimization step
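For intuition, a toy primal-dual sketch of this mini-max loop (purely illustrative, not the paper's algorithm): the offline-RL step is replaced by an exact best response over a tiny hand-made policy set with known reward/cost values, while the Lagrange variable is updated by projected online gradient ascent, a standard no-regret rule. All numbers below are made up.

```python
# Toy primal-dual loop: maximize reward subject to average cost <= cost_limit.
# Each "policy" is summarized by its (reward, cost); these values are illustrative.
policies = [(1.0, 0.0), (2.0, 1.5), (3.0, 3.0)]
cost_limit = 1.0
lam, lr, T = 0.0, 0.1, 500   # Lagrange multiplier, dual step size, iterations
picks = []
for _ in range(T):
    # policy player: best response to the Lagrangian  reward - lam * cost
    i = max(range(len(policies)), key=lambda j: policies[j][0] - lam * policies[j][1])
    picks.append(i)
    # dual player: projected online gradient ascent on the constraint violation
    lam = max(0.0, lam + lr * (policies[i][1] - cost_limit))

# the time-averaged mixture over the chosen policies is approximately feasible
avg_reward = sum(policies[i][0] for i in picks) / T
avg_cost = sum(policies[i][1] for i in picks) / T
```

The point of the mixture-of-policies view: no single policy here is both feasible and reward-maximal, but the time-averaged play alternates between a high-reward/high-cost policy and a zero-cost one, landing near the constrained optimum — the same role the distribution over policies plays in the mini-max formulation above.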
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 61
Jana Doppa @janadoppa
How can we learn safe, reward-maximizing policies from offline datasets? @Yassine62754892's NeurIPS-2025 paper gives a theoretically grounded and effective framework for offline safe RL.
- Plugs-and-plays with any offline RL algorithm
- Avoids off-policy evaluation
Replies: 1 · Reposts: 2 · Likes: 2 · Views: 300
Jana Doppa retweeted
Chieh-Hsin (Jesse) Lai @JCJesseLai
Tired of going back to the original papers again and again? Our monograph is a systematic, foundational recipe you can rely on! 📘 We're excited to release 《The Principles of Diffusion Models》— with @DrYangSong, @gimdong58085414, @mittu1204, and @StefanoErmon. It traces the core ideas that shaped diffusion modeling and explains how today's models work, why they work, and where they're heading. 🧵You'll find the link and a few highlights in the thread. We'd love to hear your thoughts and join some discussions! ⚡ Stay tuned for our markdown version, where you can drop your comments!
[image attached]
Replies: 53 · Reposts: 488 · Likes: 2.3K · Views: 824.1K
Jana Doppa retweeted
Hugo Larochelle @hugo_larochelle
We at TMLR are proud to announce that selected papers will now be eligible for an opportunity to present at the joint NeurIPS/ICML/ICLR Journal-to-Conference (J2C) Track: medium.com/@TmlrOrg/tmlr-joins-neurips-icml-iclr-journal-to-conference-track-937a898eab3d
Replies: 14 · Reposts: 77 · Likes: 462 · Views: 141K