[ML2] doom 2021 hacks
(Updated: October 23, 2021)
Sources referenced on this page:
SensePost | Hacking doom for fun, health and ammo
Doom Running on an IKEA Lamp [video] | Hacker News
Doom | Hackaday
The Real Reason Doom Eternal Is Getting Rid Of Anti-Cheat Software
Knee-Deep in the LED: Hackers Get Doom Running on Ikea Smart Bulb
Doom has been hacked to run on almost anything, and now, Ikea's Tradfri smart… (Chaim Gartenberg, @cgartenberg)
I'll choose instead to be amazed that Doom-capable computers are now… (Joel Hruska) A group of hackers has gotten Doom up and running on an unusual platform: an Ikea smart bulb.
Doom Eternal is a whole lot of fun, but fans have been divided by one… number of hackers and unfair players, particularly for PC gamers.
Porting DOOM to run on hardware never meant to run it is a tradition as old as time. Something like DOOM running within a bootloader.
DOOM On A Desk Phone Is Just The Tip Of The Iceberg: then [Josh Max] wrote in to tell us about his adventures in hacking the CaptionCall, and now we're…
By exploiting a bug in the way Doom 2 and Ultimate Doom load saved games, a programmer has found a new technique to run custom code inside…
Hackers are repurposing every conceivable device to play 'Doom,' a 25-year-old PC game. (Keith Wagstaff)
…hackers to use telephone systems without paying for the service because they are… such as the well-documented battle between the Legion of Doom (LOD) and…
We should all take some time to consider that our light bulbs have more powerful computers than the first computer many of us once owned. This perspective makes sci-fi stuff like "smart dust" seem a lot more feasible. Ubiquitous computing, what will it bring us? Of course there are also market incentives for doing so (time to market, dev costs, etc.). Empirically it seems that software simply doesn't scale as well as hardware does. I feel like this overhead would make "smart dust" impractical. Or I guess I could put it this way: on the one hand you could be impressed that a modern light bulb can run Doom; on the other you could be alarmed that you need a Doom-capable computer to run a modern light bulb.

I'll choose instead to be amazed that Doom-capable computers are now inexpensive and ubiquitous enough that it makes total financial sense to use one in a light bulb! More seriously, I see this argument all the time: that we are just squandering our advances in hardware by making comparably more inefficient software. Considering that efficiency used to be thought of on the level of minimizing drum rotations and such: the whole point is that we're now working at a much higher level of abstraction, and so we're able to build things that would not have been possible to build before. I for one am extremely grateful that I don't have to think about the speed of a drum rotating, or build web applications as spiders' nests of CGI scripts. Are there modern websites and applications that are needlessly bloated, slow, and inefficient? Certainly - but even those would have been impossible to build a few decades ago, and I think we shouldn't lose sight of that.

Sohcahtoa82 3 months ago: I dunno. CPUs are getting faster, and yet paradoxically, performance is worse, especially in the world of web browsers. That's less than 18 clock cycles per pixel, and even that's assuming no CPU time spent on game logic.

What's "low performance"? Humans measure tasks on human timescales. If you ask an embedded computer to do something, and it finishes doing that something in ms vs 10 ms vs 1 µs, it literally doesn't matter which one of those timescales it happened on, because those are all below the threshold of human latency-awareness. If it isn't doing the thing a million times in a loop (where we'd start to take notice of the speed at which it's doing it), why would anyone ever optimize anything past that threshold of human awareness? Also keep in mind that the smaller chips get, the more power-efficient they become; so it can actually cost less, in terms of both wall-clock time and watt-hours consumed, to execute a billion instructions on a modern device than it did to execute a thousand instructions on a device from decades ago. No matter how inefficient the software, hardware is just that good.

One might liken this to DOS applications depending on DOS — you wouldn't consider this to be part of the app's working-set size, would you? Also, it supports things Windows 98 didn't (anywhere, not just in its calculator), like runtime-dynamically-switchable numeric-format i18n, theming (dark mode transition!).

IncRnd 3 months ago: That's well and good - when your program is the only software running, such as on a dedicated SBC. You can carefully and completely manage the cycles in such a case. Very few people would claim software bloat doesn't otherwise affect people. Heck, the software developers of that same embedded software wish their tools were faster. Hardware is amazing. Yet, software keeps eating all the hardware placed in front of it.
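To make the "clock cycles per pixel" point concrete, here is a rough back-of-the-envelope sketch. The resolutions and clock speeds below are illustrative assumptions, not figures quoted in the thread:

# Rough per-pixel cycle budgets. All numbers are illustrative assumptions.

def cycles_per_pixel(clock_hz, width, height, fps):
    """How many CPU cycles are available per pixel per frame."""
    pixels_per_second = width * height * fps
    return clock_hz / pixels_per_second

# A single ~2.2 GHz desktop core redrawing a 1080p page at 60 fps:
# 2.2e9 / (1920*1080*60) ≈ 17.7 cycles per pixel, i.e. "less than 18".
print(cycles_per_pixel(2.2e9, 1920, 1080, 60))

# An ~80 MHz Cortex-M33 (like the bulb's) running Doom at its original
# 320x200 at 35 fps: ≈ 35.7 cycles per pixel -- tight, but roughly double
# the budget of the desktop example above.
print(cycles_per_pixel(80e6, 320, 200, 35))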
I mean, I agree, but the argument here was specifically about whether you're "wasting" a powerful CPU by putting it in the role of an embedded microcontroller, if the powerful CPU is only "needed" because of software bloat and you could theoretically get away with a much-less-powerful microcontroller if you wrote lower-level, tighter code. So who cares? Apps running on multitasking OSes should indeed be more optimized — if nothing else, for the sake of being able to run more apps at once. But keep in mind that "embedded software engineer" and "application software engineer" are different disciplines. It's like demanding the same change of both civil and automotive engineers — there's almost nothing in common between their requirements.

I think the other comment has a point though: these frameworks are definitely powerful, but they have no right to be as large as they actually are. Nowadays we're blowing people's minds by showing 10x or greater speedups in code by rewriting portions in lower-level languages, and we're still not even close to how optimized things used to be. I think the more amicable solution here is to just have higher standards. I might not have given up on Windows and UWP if it didn't have such a big overhead. My Windows PC would idle using 3 or 4 gigs of memory; my Linux box struggles to break 1.

Have you tried to load UWP apps on a machine with less memory? I believe that part of what's going on there is framework-level shared, memory-pressure-reclaimable caching. On a machine that doesn't have as much memory, the frameworks don't "use" as much memory. Really, it's better to not think of reclaimable memory as being "in use" at all.

Windows 98 had far more advanced theming than anything out there today. Today's dark mode is a far cry from what used to be possible.

We can expect in the future to see new levels of inefficiencies as hardware developments give us more to waste. Without something to balance this out, we should expect to see our text editors get more and more bloated in cool and innovative ways in the future. It makes me think of fuel efficiency standards in cars.

I think I would be more willing to embrace this sort of tech if its computing resources were easily accessible to hack on. If I could easily upload my code to this smart bulb and leverage it either for creative or practical endeavors, then I wouldn't necessarily consider it wasted potential. But here you have this bloated tech that you can't even easily leverage to your advantage. I do agree with the general point that the progress we've made over the past few decades is mind-blowing, and we shouldn't forget how lucky we are to experience it first hand. We're at a key moment of the evolution of humankind, for better or worse.

Well, yes and no… as always! The trick is that you probably don't need all, or even most, of that power to run the light. The big thing is that this chip is most likely so cheap, especially in bulk, that it doesn't make sense to get the "cheaper" variant, even if that was still available. No manufacturing process is perfect, so you just sort, label and price the output accordingly. This is fairly normal practice. LEDs, vegetables, even eggs.

I have to wonder why this is done. I know it must make sense or it wouldn't be done, I just don't understand it. If you're intentionally disabling functionality in order to sell at a lower cost, you're not actually saving any money because you still have to manufacture the thing.
It also, I assume, opens up a risk of someone finding out how to "jailbreak" the extra cores, and now you can buy an i7 for the price of an i3. Is the cost of having three different manufacturing processes so large that it's not worth switching? Is the extra revenue from having three different versions of the same physical chip enough to justify the jailbreak risk?

MontyCarloHall 3 months ago: If one of the cores is bad, it gets disabled and the part is sold as a lower-series chip. If everything is in working order, then we've got a fully working higher-tier part.

Specifically regarding Phenom II, I have a Black Edition still plugging away, serving different roles over the years, and I was able to successfully unlock and run the two locked-out cores via a BIOS option. It's never skipped a beat at stock clock. It could be that there was an oversupply of quad-cores, or perhaps, since it was a Black Edition part marketed to overclockers, the extra cores failed when overclocked. I know I wasn't able to have both the overclock and four cores, but I considered the extra cores more important, since it was already a reasonably fast chip for its day.

It's likely the latter, that it couldn't work when overclocked with all cores. The market for those is to allow overclocking, so if it can't do any overclocking with all cores, AMD likely wouldn't want to sell it as a 4-core Black Edition, since it'd probably just get returned.

Hmm, but what if 3 cores are defective, if that can happen? I imagine there is some trade-off to be made between increasingly surgical disabling of components and avoiding a menagerie of franken-SKUs. Presumably the fault rate is low enough that tolerating a single GPU core drop takes care of enough imperfect parts. Perhaps there is fault tolerance hidden elsewhere, although this seems unlikely as it would probably waste more silicon than it would save.

It gets sold as a coaster. Kind of funny that Amazon recommends it be bought with thermal paste. A coaster that allows you to play Doom.

This way you increase the total yield rate, and produce less waste.

HeavyStorm 3 months ago: Are you guys sure? I think manufacturing has nothing to do with it. The real reason, IMHO, is to have a larger range of product prices so you can cater to specific audiences. It seems people are confusing cost with price. Those two things are orthogonal.

Arrath 3 months ago: This tends to be the case later on in a product's production run: as the manufacturer has fine-tuned the process and worked out most of the kinks, the pass rate of finished items increases. At this point, yes, they may lock down perfectly good high-end CPUs to a midrange model spec to meet a production quota.

But do these production defects really meet the demand of the lower tiers? Also, how is it possible to predict the number of defects in advance so that they can make useful promises to distributors?

Well, eventually as yields improve you start handicapping perfectly valid chips to maintain market segmentation. Sometimes lower-end SKUs are made by opening a higher-end one and cutting a wire or a trace.

I'm curious about this as well. It seems inevitable that some batches will be "too good" to satisfy demand for low-end chips. Either they just accept the fluctuations in order to maximize output of high-end chips, or they would have to cripple fully functional ones to maintain a predictable supply. Interesting business.

BeeOnRope 3 months ago: It's not primarily about using defective chips, but that's a nice side effect.
As a process becomes mature, yield rates become very high and there wouldn't be enough defective chips to meet demand for a lower tier, so good chips are binned into those tiers anyway. The primary purpose is market segmentation: extracting value from customers who would pay more, while not giving up sales to more price-sensitive clients who nevertheless pay more than the marginal cost of production.

That makes sense, thanks. I wonder if it would be possible to de-bin one of the lower-end ones, assuming it is a binned version of a fully functional higher-tier chip.

The term for this is "binning", and the explanation is wholly innocent. Manufacturing silicon chips is not an exact process, and there will be some rate of defects. After manufacture, they test the individual components of their chips. These chips are designed in such a way that once they identify parts of a chip that are defective, they can disconnect that part of the chip and the others still work. I believe they physically cut stuff with lasers, but my knowledge is out of date. This process can also include "burning in" information on the chip itself, like setting bits in on-die ROMs, so that if your OS asks your CPU for its model number it can respond appropriately.

Interesting side note: the same thing happens when manufacturing even basic electronic components like resistors. That is, none were above the rated value. So yeah, don't assume a perfect Gaussian distribution when using resistors.

Peaches4Rent 3 months ago: So the best chips, which have the fewest errors from the manufacturing process, are sold as top tier. The ones which have more mistakes in them get their defective parts disabled and then get sold as lower-tier ones.

It's called "market segmentation". It's why there are different brands of soap powder from the same manufacturer even though they are all essentially the same. I don't think it has anything to do with manufacturing issues.

HumblyTossed 3 months ago: Instead of assuming, it's easy enough to confirm that CPU binning is real.

I think if you Google "price discrimination" or similar economic terms you get some explanations for this. If you just have one price, you cut out people who can't afford it, and people who can afford to pay more get away with more of the surplus. If you have several prices and create just enough difference in the product that it doesn't change the expense much, you can suck dry every class of user. Bit of an MBA trick.

Rastonbury 3 months ago: Exactly, it is not an economics trick; if companies could only supply a certain level of the market it might not even be profitable to produce. It is just economics. We are talking about chips here, so not the same bottle of soap with different labels; I'd call that a trick.

So the RoI is just different. Browsers too. And they contain a compiler and almost an operating system ;-). Indeed, browsers are a good example of ubiquitous software.

An ESP microcontroller can be bought in low quantity for less than a dollar. I mean, sure, any cost reduction at scale is meaningful, but I don't think the silicon is the expensive part at that point. It just doesn't make sense to give WiFi devices anything less than that performance; the gains in silicon space will be meaningless and you'll spend more managing that than anything.

I'm tired of the bloated software take. Hardware is meant to be used. Without these abstractions most software would be practically impossible to create.
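A toy sketch of the binning and segmentation idea discussed in the subthread above. The die size, per-core defect rate, and SKU tiers are made-up illustrative numbers, not figures from any real fab:

import random
from collections import Counter

# Hypothetical illustration only: an 8-core die, a per-core defect probability,
# and three SKUs that dies are binned into based on how many cores test good.
CORES_PER_DIE = 8
CORE_DEFECT_RATE = 0.03          # made-up number
SKUS = [(8, "8-core flagship"), (6, "6-core midrange"), (4, "4-core budget")]

def bin_die() -> str:
    """Test each core independently; return the best SKU this die qualifies for."""
    good = sum(random.random() > CORE_DEFECT_RATE for _ in range(CORES_PER_DIE))
    for min_cores, name in SKUS:
        if good >= min_cores:
            return name
    return "scrap"

counts = Counter(bin_die() for _ in range(100_000))
print(counts)
# With a 3% per-core defect rate, roughly 78% of dies come out fully working,
# so if buyers want lots of budget parts, perfectly good dies get fused down to
# the lower SKU anyway -- that's the market-segmentation side of binning.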
Without software solving more problems, what's the point of the hardware? Of course good abstractions and tools can help make software possible that was practically impossible before. But there is also a tendency to add abstraction layers of questionable value. An Electron-based UI to copy a disk image to a USB stick comes to my mind, e.g. This is just a silly example, I know. But this happens all the time. The phenomenon isn't even new. Anybody remember when the first end-user internet providers would all distribute their own software to dial in? In my experience, most problems with internet access at that time could be fixed by getting rid of that software and entering the phone number and dial-in credentials in the corresponding Windows dialog windows.

Questionable to you. Nobody is bloating dd. There is definitely bloated software, but it's not a huge issue. If it were, then the customer would care. If the customer cared, the business would care. I dislike Nautilus too.

N00bN00b 3 months ago: That works both ways though. The highly qualified software devs did indeed squander some of it away. But I'm a rather bad dev that writes really inefficient code because it's not my primary concern; I'm not a programmer, I just need custom software that does the things I need done that can't be done by software other people write. All this overpowered hardware allows my code to work very well. I've been in situations where I could pick between "learn to program properly and optimize my code" or "throw more hardware at it", and throwing more hardware at it was definitely the faster and more efficient approach in my case.

Yeah, Doom advanced the state of the art by maybe a decade? Instead of needing a Silicon Graphics workstation or a graphics accelerator, it allowed a generation of kids to play on any hardware. If you want to know what the world would be like without video game programmers, just look at internal corporate software and how slow it is.

If the market demanded greater efficiency (through climate policy, for example), we would quickly see a change in priorities.

Dah00n 3 months ago: So basically we need a climate tax on software to fix the problem. Putting the tax directly on energy would not cause much optimization in software, in my opinion. Hardware engineers, on the other hand, can and often do take responsibility. All in all, I don't have any hope of software developers being up to the task if it landed in their lap to fix this, so if we wanted their hand forced, the tax would need to be directly on software instead of hardware or energy. I don't believe this is mainly market driven, as the market is unlikely to be able to fix it. It's at least as much a culture problem.

I think of John Carmack as the software equivalent of Roger Bannister, he of the first 4-minute mile. Yes, he is absolutely incredible, but once that magic is revealed to the world, it shows what is achievable and people can follow in his footsteps. Of course, I'm over here with my poky 6-minute mile. I tried running a Quake port in that vein, but sadly none of the computers I own were able to play it without stuttering. I doubt Python could get anywhere close to that.

Narishma 3 months ago: Software seems to scale in the other direction. The faster the hardware you give developers, the slower the software they produce becomes.

Blikkentrekker 3 months ago: They show text and pictures, accept form submissions and resize the viewport when I change the window size.
Yet today they seem to require multi-gigahertz CPUs to do anything acceptably fast. Yes, there are a few that were not possible back then. Most, though, can't justify their resource consumption. Neither on the client nor on the server side.

Excluding embedded video, which indeed was very low quality then although it existed, what things do you have in mind?

In the scheme of things, this is short term. Market incentives for pushing performance are currently minor, but will have increasing influence over the next decade. Factors such as processing power hitting physical limits, and energy prices as a result of climate policy, will force engineers to build more efficient systems.

I do not think we will see "processing power hitting physical limits" anytime soon. Moore's Law is not dead yet; it is a good question whether it ever will be dead. As Jim Keller says, the only thing that is certain is that the number of people saying Moore's Law is dead doubles every 18 months.

What we're seeing now is massive parallelism, which means if your task is embarrassingly parallel, then Moore's Law is very much alive and well. Otherwise, no.

Moore's Law is a statement about transistor count on an integrated circuit - there is no "Moore's law for single-threaded processing". And transistor counts continue to double.

Yes, it has to die. Once we attain the limits of what we can do in 2 dimensions, we aren't that many exponential growth events from what we can achieve in 3. Or once we achieve the limits of silicon technology, we aren't that many exponential growth events from the limits of photonics or quantum or any other possible computing substrate. Unless we somehow unlock the secret of how to use things smaller than atoms to compute, and can keep shrinking those, we're not getting very much farther on a smooth exponential curve.

Sure, it has to die eventually, but the key phrase is "anytime soon". Do we have any evidence it will during our lifetime? Or even our great-great-great-great-great-great-grand-child's lifetime?

Moore's Law is dead as a doornail. Progress remains in parallel processing only.

I feel this every time I go to a checkout register or am on the phone with a rep doing some minor account thing. The person operating the console only has to click an option or fill in some text field, same as 20 years ago. But today, with added slowness. Knowing the amazing advances in hardware in the meantime, this hurts me.

Smart dust will probably be terrible for human respiratory systems anyway.

IgorPartola 3 months ago: One of my favorite dad jokes is that if Smart Water was so smart, why did it get trapped in a bottle?

That's assuming benign smart dust. Imagine a smart dust deployment by a hostile foreign power that is less than benign.

My guess is that in the future we'll have computers writing some highly optimized software for certain things. That's my prediction.

KMag 3 months ago: Will the specifications for the software also be machine-generated? If the specifications are human-generated, then they're just a form of high-level source code, and your prediction boils down to future programming languages simultaneously improving programmer productivity and reducing resource usage. That's not a controversial prediction. If I understand you correctly, I think you're correct that over time we'll see an increase in the abstraction level at which most programming is done. I think the effort put into making compilers better at optimizing will largely follow market demand, which is a bit harder to predict.
The programs have 2 parts: a high-level description, and a set of program transformations that don't affect results but make performance tradeoffs to tune the generated code for particular devices. The Halide site has links to some papers on applying machine learning to the tuning and optimization side of things. I can imagine a more general-purpose language along these lines, maybe in the form of a bunch of declarative rules that are semantically (though perhaps not syntactically) Prolog-like, plus a bunch of transformations that are effectively very high-level optimization passes before the compiler ever starts looking at traditional inlining, code motion, etc.

At some point, maybe most programmers will just be writing machine learning objective functions, but at present we don't have good engineering practice for writing safe and reliable objective functions. Consider some of the degenerate examples of machine learning generating out-of-the-box solutions: objective functions throwing pancakes to maximize the time before they hit the ground, tall robots that fall over to get their center of mass moving quickly, etc.

An article yesterday says Google uses AI to design chips in 6 hours; sounds like "a long way off" is now yesterday.

That headline is misleading. It's kind of like having a compiler with billions of optimization flags and using AI to select a pretty near-optimal set of flags for a particular human-generated source file. We wouldn't call the output an AI-designed program, even though such AI would be really helpful. It's an important step forward, and presumably a big time saver, but they're nowhere near giving the AI an instruction set spec or examples of inputs and outputs and having the AI generate the logic.

Blikkentrekker 3 months ago: That is already done; such software is called a compiler. There is no reason to optimize the language that programmers work in when such optimizations can better be done on the generated machine code.

By the end of the century, AI will probably be able to write software better than humans. This guy gets it.

TheBigSalad 3 months ago: Go back to using the software of the 80s then, before the evil software engineers made it all so bad.

Ads, propaganda and surveillance.

TheOtherHobbes 3 months ago: It's a small step from those to compulsory "social credit" - like money, but worse - and other more or less overt forms of thought control and behaviour modification.

I'm sorry you had to go through your childhood without Star Trek ;)

There's an idea for a series: a space opera with a cutting-edge, analytics-enhanced Ferengi ship whose "continuing mission is to explore strange new ad streams, to fleece out new life and new civilisations, to boldly trade where no one has gone before". The main adversary will be the cookie monster.

I really did go without Star Trek! I had some small exposure to Star Wars, but what really grabbed my attention was the Neuromancer novel, and later the Matrix film series. Of course I'm cherry-picking my experience, but it's a valid observation of yours that while I'm a technologist through and through, I often focus on its ugly side.

Yeah, I really enjoy optimistic sci-fi :) I do enjoy dark sci-fi every now and then, but I generally like my heroes to be scientists and explorers, and to solve ethical questions.

I like that too. I like it when the threat is external, and the people get together and collectively do something that makes the threat go away.

Sohcahtoa82 3 months ago: I just lost the game.

Off topic, but Deep Space 9 was not a Federation starbase.
It was a Bajoran Republic station under Federation administration following the Cardassian withdrawal.

I stand corrected.

SyzygistSix 3 months ago: São Paulo did away with public advertising for a couple of decades. I believe it is creeping back in now. But it can be done.

Star Trek was always a spherical cow as far as futures go. I'm not saying that it's not useful as an inspiration (socially, technologically, and as emotional support), but realistically humans will pursue profit until we get a UBI system in place.

Andrex 3 months ago: Such a shame the current writers of the franchise apparently didn't, either. Which is depriving current and recent generations of kids of that optimistic ideal. Yes, Discovery season 3 is a thing I know about.

Considering how much computing power is available and how much of it is used to deceive, misinform, or manipulate people, this sounds likely. The only thing more disturbing and sad about this is how much consumer demand there is and will be for deception, misinformation, and manipulation.

I think we already have those, even without ubiquitous computing. Hell, we had them even before we had any computing whatsoever. Sure, they are more efficient now (what isn't), but they always existed.

Older ads were far less powerful. They also couldn't embed themselves into your everyday life so much. At one point, I creeped myself out by realising that my first instinct is to call out "Hey Google" when I need something, even if I'm away from my Google Home.

I remember reading about some circuit where they replaced a timer whose job was to generate a PWM signal with a fully featured microcontroller, because it was cheaper that way.

However, you want to stay safe these days, so it's better to keep up to date, right? Unfortunately, my experience has been that Ikea will push out a firmware update, and then I don't discover it until the outside lights fail to turn on or off at their appointed time and have to be rebooted. Yes, we live in an age when you can reboot a light bulb. If that ever happens, thus endeth my foray into smart lighting. I've put my foot down: if it needs Somebody Else's Computer to function, I don't want it.

Nextgrid 3 months ago: Ads, obviously. Totally a good trade-off, making all the world's computing adversarial so ads work slightly better.

When adblocking software becomes indistinguishable from stealth technology.

These are "smart bulbs". Lights don't need computers to operate, but that doesn't stop people trying to add "features" to lights with computers. Computers have lights, why shouldn't lights have computers? Nuclear power plants have phones, why shouldn't phones have nuclear power plants?

Given how computation is more and more energy efficient and requires near-zero material to build, will there be a day when we consider computing cycles a priori a bad thing? Maybe there will be an argument about how terrible it is to have smart dust, from those who consider it a new form of pollution and toxicity.

Thank you, I think I am just gonna quit my work now and spend the day thinking about it, walking around and being anxious too.

Sometimes it takes a while to understand how far we have come with miniaturization of tech. IBM already demoed a 1mm x 1mm computer-on-chip concept for crypto anchors that can almost run Doom.

I believe there is a point at which we will be emulating silicon so fast that we will be able to realize true Single Instruction Set computing.
Such a machine would be nowhere near as efficient as a CISC computer in terms of work per clock cycle, but what if our "silicon" in the future can run at an almost arbitrary frequency? We would be able to emulate any instruction set natively in real time. The perfect FPGA.

The future sucks.

Angostura 3 months ago: I always like the fact that the average musical birthday card, popular in the 90s, had more compute capacity than the computer in the Apollo command module.

Those ICs don't have any compute capacity at all.

Mmmmh, the AGC was not that low level.

Eduard 3 months ago: I doubt it. These are around 20 MIPS. Sure, but I couldn't find a teardown. It seems likely they are using something similar. It's difficult to find a cheaper, broadly available chip these days. You can find Z80 clones, but even they are generally upgraded and therefore more powerful than the Apollo computer.

Wow, the downvotes on this are pretty harsh! Do people really think the Apollo computer is more powerful? And have any evidence? I'd be surprised if you can get a microcontroller with as little processing power these days.

I think the downvotes come because you keep saying that music cards use microcontrollers, when they do not. The cheapest, smallest 15-second-storage "chip corder" (which is what these cards use) has GPIO, SPI, the ability to trigger different messages, etc. There's no way this is under 12, transistors!

Citation please? That sounds unlikely!

Reminds me of the Ship of Theseus. Good work nevertheless! I think it's fair to say they're playing Doom on the lamp (and even more impressively, it's not a whole lamp, but just a light bulb!). They use an external keyboard, monitor, speakers, and storage for the game data, but the processor and RAM are from the original bulb.

If someone said "I'm playing Doom on my PC" back then, they would also have been using an external keyboard, monitor, and speakers. And the game would have shipped on external storage (floppy disks).

Before clicking, I assumed the lamp had a small screen and they were using that screen. I was hoping to see them running a 1-pixel version of Doom on an RGB bulb.

Aeronwen 3 months ago: Was hoping they stuck the light behind a Nipkow disk.

I didn't really expect it to happen, but I still want to see it.

Actually, the correct technical term for "lightbulb" is "lamp", and the correct term for "lamp" is "fixture" :)

Maybe in your language, but looking at my English dictionary, it clearly says a lamp is a device for giving light, consisting of a bulb, holder and shade. Historically a lamp would consist of the wick, oil and holder.

From Wikipedia (Electric light): "In technical usage, a replaceable component that produces light from electricity is called a lamp." When you say "they ran Doom on a lamp", that isn't a piece of scientific literature. It's just conversational English, and as such, using the common dictionary definition of the word lamp, as opposed to a technical definition, is entirely appropriate.

I think that the goal of the project is not "Doom running on a Cortex M4" (actually an 80 MHz Cortex-M33). The game is cheating a little bit, since it loads a lot of read-only data from external SPI flash memory, and all the code is in the internal 1MB flash. It also doesn't have quite all the features. No music, and no screen wipe effect. I worked on a memory-constrained Doom port myself, and that silly little effect is incredibly memory intensive since you need two full copies of the frame buffer. Overlays were a thing in DOS days, as well.
Not for Doom specifically, but I've seen quite a few games using them.

They're too slow for action games like Doom.

Retric 3 months ago: The actual lamp isn't. It's just reusing a chip with a monitor.

A PlayStation doesn't have a monitor, controller, loudspeaker etc. It's all external stuff you have to plug in before you can play. Still, we say "I'm playing Doom on my PlayStation".

Those things are meant to plug right in. I've never had to solder together my own breakout board and carrier board to hook a PlayStation up to a TV while breaking the PlayStation's main features in the process. That lightbulb is completely disassembled and won't function as a lightbulb anymore. And nothing they added was plug-n-play. Edit: It's still a fun and cool project.

SiempreViernes 3 months ago: You also say you are "playing video games on my playstation", which doesn't make much technical sense, so clearly appeals to common idioms aren't without problems. In any case, the argument is that the mini console they built is no longer a lamp, not that you can't play games on a console.

Maybe this could be a new take on "a modern pocket calculator has far more computing power than the moon landing systems in '69": a modern lava lamp has as much computing power as my mid-90s desktop PC.

Thanks for the reference. If you can replace all the players, owners, coaches, logo, stadium, etc... So the OP is not entirely wrong: they ported a game played on a GBA. The writeup explicitly says «we could trade off some computing power of the 80MHz Cortex M33 to save memory». Still, the original Doom features are all there, except multiplayer. They also restored some missing graphics features of the GBA port, like z-depth lighting. Yes, 4MB vs kB is more impressive than k vs k, but cutting the memory requirements in half is still noteworthy.

So a port of a port isn't a port? At a quarter the resolution.

A corollary of Moore's Law: the size of things you can run Doom on halves roughly every two years. Carmack's Law?

These are always awesome and I never stop being impressed by what folks can do. Here's my idea: could we do this with a drone swarm? And have players still control DOOM guy somehow? I'm imagining sitting on the beach at night and someone is playing a level while everyone looks up at the sky at a massive "screen".

DOOM as a Boltzmann brain might take a while before that's implemented, but I bet it'll happen eventually.

TheCraiggers 3 months ago: Just because pigeons can deliver messages doesn't mean it's Turing complete. Although they could be used in data transfer.

Game of Life is totally Turing complete though, so it's already proven that you can indeed run Doom on it. It would be hard to prove that it hasn't already.

Therefore all you need is powerful 8-bit color drones and someone to design a good edge representation. Just dangle a string of 50 LEDs below each drone. Close enough! Someone will do it, I'm sure. Now, to figure out how to pump in the soundtrack. What about a swarm of pigeons?

PhasmaFelis 3 months ago: Years ago, I saw someone implement a vector display using a powerful visible-light laser on a gimbal, instead of an electron gun with magnetic deflection. Then they used it to play Tetris on passing clouds.

Game Boy Color: 8-bit CPU, 2MHz, 40k RAM. Supposedly the guy (Palmer) who created the commercial GBA version had done a tech demo for the GBC, but Carmack decided it was too stripped down and proposed a Commander Keen port for the GBC at the time instead.
The GBA came out a couple of years later and was powerful enough.

Or on a building. See Project Blinkenlights from the early 2000s (not Doom, but still video games).

Too easy. At first I was imagining some amazing drone dance, but then I realized that it would be just a wireless screen with horrible battery endurance. They've been playing Tetris and Snake on some weird things already; I've seen Tetris on a high-rise, and Snake on a Christmas tree.

SonicScrub 3 months ago: Are there some crazy patent fees? This is at least a common heuristic that I use when designing hardware. These numbers change depending on volume, and how many zeros the final price has.

Literally the same model. I guess it's mostly shipping, import duties, taxes, marketplace fees, free shipping to the customer, returns handling and profit margin. Considering that you can pick up free portable chargers at trade shows, they must cost next to nothing to source. The LED adds a little bit more to the price, but again not much. The whole package is very cheap wholesale and can probably be marked with whatever brand you want if you ask an Alibaba vendor. The store has already bought them from some company that bought them from China, so there's already a slight markup there, and then the store adds a little more. They know people will buy them and, depending on what kind of store it is, can charge a little more if they know their customers well. A Walmart-like store isn't going to be able to sell them for much above cost, but a specialty bike shop can mark them up more since their customers are already spending higher prices. A specialty bike shop might even order them with custom branding, adding a little more to the final price.

As for something like the Ikea bulb in the article, it includes an RF module that isn't that cheap. Ikea does win out compared to other stores for things like this, because Ikea is buying the units from themselves. The Ikea Sonos speaker is the only thing I've ever seen there that wasn't a pure Ikea product. They really have mastered horizontal and vertical integration.

There's a whole theory behind pricing. Low volume products need high margins to be worthwhile.

Ekaros 3 months ago: Because they can charge that much? How could it be otherwise?

That's a very powerful lightbulb! It has more CPU performance than my first computer, and costs many times less at the same time. The progress of the semiconductor industry is fantastical.

Perhaps the most egregious of these was TeX running on a Cray supercomputer. Time on these machines was metered heavily. I can't imagine anyone actually used it for formatting papers.

NaturalPhallacy 3 months ago: I've touched them with my own hands. And this was in the money of the day.

Another guy got Doom running on potatoes, so I'd say yes.

Both the video and the write-up appear to be gone? Did IKEA complain?

So, why does this device need such processing power? Can this really be cost effective?

The system-on-chip is the MGML, which needed to be powerful enough to run multiple wireless protocols (Zigbee, Thread, Bluetooth), so the lamp can be controlled by any of these. These are very complex protocols. The Thread spec, for example, is hundreds of pages. I did a formal security review of it on behalf of Google a while back. Bluetooth is even more complex. RF signal processing, packet structure, state machine, crypto, logical services, etc.
The software complexity of these protocols is greater than the complexity of a rudimentary 3D game like Doom, so it's expected that whatever chip can run these protocols can also run Doom.

Protocols, particularly those used for security, should be as simple as possible. I know it's a hard problem and people tend to use what is available rather than solving the hard problem.

I would have thought they'd have specialized chip hardware already to deal with these, but maybe I don't know enough about that kinda thing.

Baking something like that into hardware is probably not a good idea, because then you can't update it when vulnerabilities are found.

But do you get software updates and deployment to your device when vulnerabilities are found?

My Hue lamps had their firmware updated through the app a couple of times, and I had them for years.

Makes me smile to think that one day my bank account will be drained of all its BTC because I forgot to patch my bedside lamp last Tuesday. That would make me frown. Just saying.

It's already there. However, this is all off-the-shelf ware, and as long as the energy consumption is not a problem, I am fine.

It's burst processing. You do actually need high processing speeds for short periods of time to implement network protocols like these effectively. Think cryptography, reliability, etc. It sounds counterintuitive, but getting the job done quickly so you can go back to low power mode sooner actually saves power, even if the instantaneous power draw while processing is higher.

Nearly all cryptography on MCUs is hardware based. It would have otherwise been completely impossible to do timing-sensitive crypto work like WiFi or BT.

Symmetric cryptography is often hardware based, but asymmetric crypto rarely is.

And this also assured it being dead on arrival.

Actually, Thread is alive and kicking! It has a lot of momentum at the moment.

It is not alive. I haven't seen a single device working with it, while I have at least seen one Apple Home compatible device in an Apple store. Tuya, on the other hand, is in every retail outlet; it's just that people don't know that Tuya is under the bonnet.

Maybe not in your country? Don't get me wrong: we are still very early in real-world deployments.

I am not lying; the assortment of Thread-based devices ever seeing sale is much smaller than the size of Thread's illustrious board of directors. They literally have more talk shops per month than devices released. That just means little more than "who bought or used the spec at some point".

By today's standards Doom doesn't need much processing power. You could probably find exactly the right chip that has only just as many bits of RAM as you need for the lamp's functionality. But that would probably be more expensive to develop and ship than just using standard parts?

Even more radical: I suspect with a bit of cleverness you could probably do everything the lamp needs to do with some relays and transistors. But today, that would be more expensive.

You could do something very basic with discrete components for controlling wireless lighting systems, but the system starts to get out of hand when you need to have a bunch of lights nearby. It's much cheaper, simpler, and smaller to reduce it down to a chip and move to a digital RF system. I've got a bunch of RF controlled outlets in my house, but it's just about the dumbest system you can buy. It's on par with the typical remote garage door opener.
I'd like to be able to remotely control them away from home, or be able to give each light or outlet its own schedule, and that requires either a central controller or each device having network access for network time and remote control.

Interestingly, a friend rented a house in college once that had a system of low voltage light switches that ran back to a cabinet filled with relays that controlled light switches and outlets. No major benefit to the user other than a control panel in the master bedroom that lets you control the exterior and some interior lights. It was a neat system but definitely outdated. I'd imagine a retrofit would be to drop all of the relays for solid state and provide a networked controller to monitor status and provide remote control.

I should have been less sloppy: you'd have to rethink what the lamp needs to be able to do slightly, too.

It doesn't need it; it's just that chips that can run Doom are the dirt cheap bottom tier chips now. Rather than making some custom chip only just powerful enough to run the lamp software, you may as well just stick a generic one in.

Chips that can run Doom are nowhere near the dirt cheap bottom tier. Chips that can run Doom, though, are just about at the low end for internet-connected devices. The chip in the bulb is exactly in the right ballpark for the application. You do need a fairly beefy chip to run multiple network protocols efficiently.

There is no network stack on the Ikea bulbs. They only support local communication via Zigbee.

Zigbee is a network protocol with a network stack. It has addressing, routing and routing tables, fragmentation and reassembly, discovery, error detection, packet retransmission, and everything else you'd expect from a functional network protocol.

Karliss 3 months ago: It is an IoT thingy with a wireless connection, which puts it in a category where certain factors combine. It being IoT already means that the target audience doesn't care about the price that much. For the product designer it makes sense to use a ready-made module, because it avoids the need to do RF design and the certification. It also simplifies the design if you run your user application inside the wireless module instead of having an additional MCU communicating with the wireless module. Such modules tend to have medium to high end MCUs.

Why do wireless modules need such processing power? Wireless certification is a pain, so there will be fewer product variants for wireless modules compared to something like 8-bit microcontrollers, which come in a wide variety of memory, size and IO configurations. If you have a single product variant, better to have a slightly more powerful MCU, making it suitable for a wider range of applications. Might as well split the resource budget on the same order of magnitude between the wireless part and the user application: N kB of memory for wireless and N kB for the user application, instead of N and 0. Similarly, there are basic speed requirements for the digital logic buffering the bits from the analog RF part and doing the checksums.

The processing power needed to run a decent internet-connected device with typical software stacks these days is about the same as the processing power needed to run Doom. Because mass-produced microcontrollers are dirt, dirt cheap. Plus, how else will malware people run their stuff?

The Ikea bulbs are actually pretty good malware-wise. The bulbs do not connect to the internet; they use Zigbee to communicate with a remote. Which can either be a dumb offline remote or the gateway device.
If you had to design an IoT bulb, this is the ideal setup.

In other words, it does connect to the internet, it also sits on the LAN to give attackers access to all your other devices, AND it sits on Zigbee to give attackers access to those as well.

No, it can be commanded from the Internet - big difference. It never has a direct connection to the Internet, and even that is entirely optional and a hard opt-in (buying more hardware). And if you have attackers on your LAN, you're at the point where controlling your lightbulb is the least of your problems. As for Zigbee, go on, present your alternative - I'm all ears!

As far as I know, only Apple does the local network stuff. If a device is Alexa or Google Home compatible, it talks directly to some cloud service from the manufacturer on the Internet, which then talks to Google or Amazon.

We're still talking about the light bulbs, aren't we? How that gateway connects to the Internet is up to it - but at no point is either the lightbulb or its LAN gateway in any way connected to the Internet. Therefore, neither the bulb nor the gateway poses a direct security or privacy risk. All the security is offloaded to the gateway, and you are entirely free to choose the vendor of your Internet gateway, or indeed opt for none at all (and possibly use a VPN if external access is desired).

That's how it works. If you think otherwise, please link to the Google Home docs that explain how that could possibly work, because I can't find them. Here's how you hook up Home Assistant to Google cloud. As you can see, turning it into a cloud service from Google's POV is required. You can either use Home Assistant Cloud, see? The commands flow from Google Home devices, to Google's cloud, to the vendor's cloud, to the vendor's devices. If that sounds stupid, congrats, this is why they call it the internet of shit. "IoT devices" here being smart appliances, not the Home Hub or Apple device.

I'm not sure why you think that having a proxy in the middle will protect you.

See, I have in fact set this up in the past, although not with IKEA lamps, but some other cheap Zigbee-compatible ones. The Home Assistant instance was configured with a dev account to work with Google's crap, but the devices themselves only ever talked to it, not Google or any vendor-provided server. Now, that same physical box might also have the capability to work as a Zigbee-Alexa or Zigbee-Google translator, which would require a vendor server as you said, but those options are, well, optional. Of course, if you set up Home Assistant you can firewall them off the internet. You're replacing the vendor's cloud service with Home Assistant, effectively.

For example, I had to work out that in order to get Broadlink devices to stop rebooting every 3 minutes because they can't contact their cloud crap, you have to broadcast a keepalive message on the LAN (it normally comes from their cloud connection, but their message handler also accepts it locally, and thankfully that's enough to reset the watchdog). This involved decompiling their firmware. I think that patch finally got merged into Home Assistant recently.

My point is that this is not the intended use for these devices. Normal people are going to put the gateways on the internet and enable the Google integration; in fact, it's quite likely that they will sign in to some IKEA cloud service as soon as you put the gateways on a network with outgoing internet connectivity, even before you enable the integration. As far as I can tell,
IKEA has no servers or infrastructure for their devices. That's how those systems work. Only Apple offers direct local connectivity, as far as I know. These days Google Home has local fulfillment, but that seems to only be offered as an addition to cloud fulfillment. It always has a cloud fallback path. There is a bypass path these days for local access, but it is always in addition to the cloud path, and only an optimization.

I don't know how the IKEA hardware works. However, it is not true that Alexa has to talk to a cloud service to integrate with all IoT devices. The Echo discovers WeMo devices via UPnP. This has been a thing for quite a while[0]. I believe you are correct that Google Home has no local network device control.

The IoT device can certainly work like that. The comment is specifically talking about Google Assistant support, which, as Home Assistant users have experienced, does require cloud server access, even if this seems unnecessary in cases where the devices are only being controlled within a local network.

Big difference from what? You do realize that the vast majority of remotely exploitable security vulnerabilities are in software which can be commanded from the Internet, right?

I'm quite certain that it's much harder to exploit something that you can't even send a TCP packet to. You could exploit the cloud service directly and gain control of the device, but that's like stealing the security guard's master keys - you can't call that a vulnerability in the door lock, can you?

Why on earth do you think you can't send a packet to the device? How do you think the cloud service communicates with it?!

There are 2 layers of relay devices in between. And the only one with direct internet access is a device you already have on the internet, which is developed by top brains and maintained for many years to come, unlike your average smart bulb directly hooked to the internet with minimal security.

If you buy the dumb remote, you get a useful smart light setup with no internet or even local network connectivity. It's useful because you can turn a room full of lamps on at once or adjust their color.

Oh boy, it sure is impossible to exploit something through a proxy in a training diaper!
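To put a number on the "race to sleep" point made earlier in the thread (a faster chip that finishes its burst of protocol work quickly and sleeps can use less energy overall), here is a rough sketch. Every power and timing figure in it is an illustrative assumption, not a measured value for any real MCU:

# Illustrative "race to sleep" comparison. All power/time figures are made up
# for the sake of the arithmetic; they are not specs of any real chip.

def energy_per_period_mj(active_mw, active_ms, sleep_mw, period_ms):
    """Energy (millijoules) spent per wake-up period: active burst + sleep remainder."""
    sleep_ms = period_ms - active_ms
    return (active_mw * active_ms + sleep_mw * sleep_ms) / 1000.0

PERIOD_MS = 1000.0  # say the radio/crypto work has to happen once a second

# A slow, low-power core: 5 mW while active, but it needs 200 ms to do the work.
slow = energy_per_period_mj(active_mw=5, active_ms=200, sleep_mw=0.01, period_ms=PERIOD_MS)

# A faster core: 20 mW while active, but it finishes in 10 ms and sleeps the rest.
fast = energy_per_period_mj(active_mw=20, active_ms=10, sleep_mw=0.01, period_ms=PERIOD_MS)

print(f"slow core: {slow:.3f} mJ per period")   # ~1.008 mJ
print(f"fast core: {fast:.3f} mJ per period")   # ~0.210 mJ

Under these assumed numbers, the chip with four times the instantaneous power draw still uses roughly a fifth of the energy per period, which is the counterintuitive point the comment was making.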