Ingeun Kim

902 posts

@IngsParty

Crypto & Blockchain { Researcher | Developer } Ex - FOUR PILLARS @FourPillarsFP @FourPillarsKR CURG @curg_official D3LAB @D3LAB_DAO Opinions are my own

Seoul, Republic of Korea · Joined October 2021
2.4K Following · 793 Followers
Ingeun Kim @IngsParty ·
The National Tax Service's taxation of cryptocurrency is scheduled for January 1, 2027. Notably, this month the agency began selecting a contractor for its 'Integrated Virtual Asset Analysis System', with development set to start in April and finish in December. I got curious how much tax my own crypto holdings would face once taxation kicks in, so I built a calculator for personal use. While building it, I dug through the tax plan and related materials and learned a few things I hadn't known. Figuring this information should reach crypto people, I gave it a light UI touch and opened it up. It's still a WIP (no domain yet...), so treat it as a simple way to estimate your tax bill. Feedback is always welcome! 😆 crypto-doomsday-calculator.vercel.app github.com/ingeun92/crypt…
파파노믹스 @Papa_Nomics

The crypto tax grace period is over: the complete scenario of the NTS emptying your wallet.

Every election season the public gets high on the sweet drug of "crypto tax deferral" that politicians toss out, and assumes taxes will forever be someone else's problem. But reality is harsh. While your eyes are fixed on Bitcoin at $100K, the government is quietly finishing the 'guillotine' it will hold to your neck.

1. Fact check: the system has already started running
Look at the tender notice for the 'Integrated Virtual Asset Analysis System' that the National Tax Service posted with the Public Procurement Service today (the 11th). This is not mere saber-rattling. It's a monster of a system: transaction records pulled from Upbit and Bithumb are just the baseline, and it fuses overseas-exchange reporting data and even your personal wallet addresses (MetaMask and the like) with on-chain blockchain data. Starting next January, up to five years of the gains you scraped together in crypto, including fund flows you thought were safe because you quietly moved them to overseas exchanges, can be exported to a spreadsheet with a single click on an NTS monitor.

2. Airdrops and staking: they'll tax you for breathing
The most chilling part is the scope of taxation. It isn't just the profits from buying and selling coins. Staking rewards from DeFi deposits and coins you received for free as airdrops are all set to be treated as 'other income' and taxed at 22%. Every on-chain action you fought to build while surviving in this market is, to them, just a 'revenue collection' event: slaughtering a fattened pig.

3. Stocks get relief, crypto gets squeezed (a tilted playing field)
Worse still, there is no loss carryforward. Say you lost 100 million won on crypto last year and made 10 million won this year. Common sense says you're 90 million in the red, but the NTS system will coldly send a tax bill for 22% of this year's 10 million alone. Protecting stock investors while treating crypto investors strictly as holders of intangible assets: it is the most complete, perfectly legal structure of extraction.

While the public pours its anger into community keyboards ("I don't want to pay taxes", "this is unfair"), the gears of the system have already begun to turn. If you cannot account for your complex on-chain flows yourself, you will be hit with a tax bomb plus penalties larger than your gains and be permanently pushed out of the market. Unless you set up a legal exit and structure to protect your assets in advance, next year your account will be emptied in the most painful way possible. Keep going.
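The thread's third point is simple arithmetic, and a tiny calculator makes the asymmetry concrete. A minimal sketch in Python, assuming the 22% flat rate described above and a 2.5M KRW annual basic deduction; the deduction is my assumption based on the widely reported draft rules, not something stated in the thread:

```python
# Hedged sketch of the proposed Korean crypto tax, per the quoted thread.
# Assumptions (not confirmed above): 2.5M KRW basic deduction per year;
# losses from prior years are NOT deductible (no carryforward).

RATE_PCT = 22          # 20% national + 2% local surtax
DEDUCTION = 2_500_000  # assumed annual basic deduction, KRW

def crypto_tax(gain_krw: int, prior_year_loss_krw: int = 0) -> int:
    """Tax owed on this year's net crypto gain, in KRW (rounded down).

    prior_year_loss_krw is accepted only to show it has no effect:
    under the rules described above, last year's losses cannot offset
    this year's gains.
    """
    taxable = max(gain_krw - DEDUCTION, 0)
    return taxable * RATE_PCT // 100  # integer math, no float rounding

# The thread's example: lost 100M won last year, made 10M won this year.
# The 100M loss is ignored; tax is charged on this year's gain alone.
print(crypto_tax(10_000_000, prior_year_loss_krw=100_000_000))
```

Running the thread's example gives a 1.65M KRW bill on a portfolio that is 90M down overall, which is exactly the asymmetry the author is complaining about.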

Ingeun Kim retweeted
xDFi @xd_protocol ·
Stop betting. Own the house. To keep your peace, tune out the market noise. Don't chase the direction, own a system that turns volatility into profit. Stop tossing chips at a table you don't control. Be the one who owns the game.
Ingeun Kim retweeted
xDFi @xd_protocol ·
xDFi’s Genesis Phase is running hot! Right now, the 100K cap is open only to those on the waitlist. In about 28 hours, the pool opens to everyone else. Deposit into xDFi now for around 5x delta-neutral yield! xdfi.net/deposit
Ingeun Kim retweeted
xDFi @xd_protocol ·
xDFi Genesis Phase Deposits are now LIVE! We’ve just opened up the 100K cap for xDFi deposits.
Ingeun Kim retweeted
xDFi @xd_protocol ·
The first 100K deposit for xDFi opens on Feb 25 at 2 PM (UTC). ! IMPORTANT ! This initial cap is reserved exclusively for wallet addresses that signed up for the WAITLIST. The window will stay open for 72 hours, but please keep in mind that it will close early if the 100K limit is reached before then. We’re also planning to distribute xD tokens equivalent to the value of your deposit, so we’d love for you to get involved! 😆 xDFi. xdfi.net/deposit
Ingeun Kim retweeted
xDFi @xd_protocol ·
Launching... 99.9% Loaded [ █ █ █ █ █ █ █ █ █ █ ] The wait is finally over! xDFi soft launch has officially landed. xdfi.net But wait, what exactly is xDFi anyway?
Ingeun Kim @IngsParty ·
While studying Vibe Coding, which is currently shifting the paradigm of software development, I found that the specific terminology and layer structures supporting the concept weren't clearly organized in my mind. So I designed an "AI VibeCoding Layer Map" to clarify the technical mechanism through which results are generated. The map divides the process of turning human intent into technical execution into four core layers.

- Client & User Interface Layer (@claudeai, @ChatGPTapp, @GeminiApp, @opencode): The starting point of Vibe Coding. It includes tools like Claude Code and Gemini CLI, where users convey their intent in natural language. This layer tightly integrates the user's local files and terminal environment with the AI, projecting the user's intent onto the system without complex configuration.
- Reasoning & Logic Engine Layer: The core intelligence that interprets the conveyed intent and devises strategies. LLMs perform high-level reasoning, while AI agents build execution plans on top of that reasoning. The agent acts as an orchestrator that self-corrects when results are wrong and searches for the optimal outcome.
- Model Context Protocol (MCP) Layer: In the past, AI required separate integration work for each specific tool it used; MCP consolidates these into a standardized data bus. It serves as a universal link through which any model can reach the functions and data in the layers below.
- Tools Layer: Where the AI's thinking becomes actual action. Concrete capabilities live here: file reads and writes, API calls, web search, and terminal command execution. This layer enforces execution rules and permissions to provide a reliable environment, so that Vibe Coding stays stable.
This layer map demonstrates that Vibe Coding is more than just a passing trend; it is a sophisticated technological ecosystem where standardized protocols and autonomous agents are combined. Through this structure, where intelligence and tools are organically connected, developers can focus solely on essential problem-solving without being tied down by trivial syntax.
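As a rough illustration of how the four layers hand off to one another, here is a minimal Python sketch. All class and function names (`Tool`, `MCPBus`, `agent`) are invented for illustration and are not part of any real MCP SDK:

```python
# Minimal sketch of the four-layer flow described above. Names are
# illustrative only; a real MCP implementation looks quite different.

from dataclasses import dataclass

@dataclass
class Tool:
    """Tools Layer: a concrete action behind an execution-permission gate."""
    name: str
    allowed: bool

    def run(self, arg: str) -> str:
        if not self.allowed:
            return f"[denied] {self.name}"
        return f"[ran] {self.name}({arg})"

class MCPBus:
    """MCP Layer: a standardized registry any model can call tools through."""
    def __init__(self, tools: list[Tool]):
        self._tools = {t.name: t for t in tools}

    def call(self, name: str, arg: str) -> str:
        return self._tools[name].run(arg)

def agent(intent: str, bus: MCPBus) -> list[str]:
    """Reasoning & Logic Engine Layer: turn intent into a tool plan.
    A real agent would consult an LLM here; this stub hard-codes a plan."""
    plan = [("read_file", "app.py"), ("run_terminal", "pytest")]
    return [bus.call(name, arg) for name, arg in plan]

# Client & User Interface Layer: natural-language intent enters here.
bus = MCPBus([Tool("read_file", True), Tool("run_terminal", True)])
print(agent("fix the failing test", bus))
```

The point of the sketch is the separation of concerns: the agent plans, the bus standardizes access, and the tools gate execution with permissions, mirroring the layer map above.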
Ingeun Kim @IngsParty ·
Is there a rule that says perps must only be used for futures trading? By integrating the diverse DeFi apps and services of the long-standing EVM ecosystem, the utility of perps can be expanded even further.
- Fully on-chain basis trading
- Popularizing risk-free returns through delta-neutral strategies
- Zero-slippage automated liquidation protection
Unlike existing perps, @HyperliquidX is turning these features into reality by leveraging "Precompile" technology.
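The delta-neutral basis idea mentioned above can be shown with back-of-envelope arithmetic: an equal-size long spot leg and short perp leg cancel price exposure, leaving funding payments as the yield. All numbers below are made up for illustration:

```python
# Back-of-envelope sketch of a delta-neutral basis position:
# long 1x spot + short 1x perp. Price moves cancel out; the return
# comes from funding that perp longs pay to shorts. Illustrative
# numbers only, not from any live market.

def basis_pnl(entry: float, exit: float, qty: float,
              funding_received: float) -> float:
    spot_pnl = (exit - entry) * qty   # long spot leg
    perp_pnl = (entry - exit) * qty   # short perp leg (exact mirror)
    return spot_pnl + perp_pnl + funding_received

# Price rises 20%: the two legs offset exactly, funding is the profit.
print(basis_pnl(entry=100.0, exit=120.0, qty=10.0, funding_received=35.0))
# Price falls 20%: same result, by construction.
print(basis_pnl(entry=100.0, exit=80.0, qty=10.0, funding_received=35.0))
```

The PnL is the funding alone regardless of direction, which is why the strategy is called delta-neutral; in practice fees, funding-rate flips, and liquidation risk on the short leg make it less than literally risk-free.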
xDFi @xd_protocol
x.com/i/article/2016…
Ingeun Kim @IngsParty ·
"You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased."
Andrej Karpathy @karpathy

A few random notes from claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So certainly it's a speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
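Karpathy's "give it success criteria" and "naive first, then optimize" advice can be sketched in a few lines: the obviously-correct naive version acts as the spec, and an equivalence test is the criterion an agent must keep passing while it optimizes. The function names here are illustrative, not from the thread:

```python
# Illustrative sketch of "write the naive algorithm first, then optimize
# while preserving correctness": the naive version is the spec, the
# equivalence check is the success criterion the agent loops against.

def prefix_sums_naive(xs: list[int]) -> list[int]:
    """Obviously-correct O(n^2) reference implementation (the spec)."""
    return [sum(xs[: i + 1]) for i in range(len(xs))]

def prefix_sums_fast(xs: list[int]) -> list[int]:
    """Optimized O(n) version an agent might produce."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

# Success criterion: the fast version must agree with the naive spec
# on every test case. An agent can iterate freely as long as this holds.
for case in ([], [1], [3, 1, 4, 1, 5], list(range(-5, 6))):
    assert prefix_sums_fast(case) == prefix_sums_naive(case)
```

Declarative criteria like this are what let the agent loop on its own: it can rewrite the fast version however it likes, and the check tells it when it is done.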

Vajresh Balaji @bvajresh ·
@IngsParty When it feels normal to regular people, then it shifts from niche to mainstream real yield territory.
Ingeun Kim @IngsParty ·
Delta-neutral is basically the kickoff for the 'Real Yield' era. The steady growth of Perp CEXs and DEXs is setting the stage for delta-neutral strategies to really take off. Doing everything manually is still a hassle with a high bar for entry. But once protocols automate and open this up to everyone, delta-neutral will easily be the hottest market out there.
xDFi @xd_protocol
x.com/i/article/2014…