I have been using Stremio for the last couple of days, but when it starts, it activates my Nvidia GPU. Even though I force it to use the internal Intel GPU in both the Windows graphics settings and the Nvidia Control Panel, those settings don't seem to have any effect on Stremio. Is this a known thing? Some people say Stremio uses the GPU for bitcoin mining, and I suspect this too because my GPU usage is around 35% when just using Stremio. I am sorry if it's been asked before, but I haven't found any answer on the internet.
[HELP] DWM High GPU Usage Even at Power-Saver Plan (No Transparency and Desktop Background, No Background Apps)
Hi all, I hope someone can help regarding the Desktop Window Manager's (DWM) high GPU usage. The PC is a prebuilt Asus Z240 AIO, so I don't have any part customization: https://www.asus.com/All-in-One-PCs/Zen-AiO-Pro-Z240IC/ I logged in and got a "Preparing your desktop" welcome greeting -- apparently, the PC got updated to version 2004. Upon getting into the desktop, it felt really sluggish, and upon checking the Task Manager, DWM is using a constant 30% of my GPU, as if it were mining bitcoin (see pics 1 & 2). This is on the power saver plan. On normal and high performance, GPU usage reaches 50%. What I did so far:
Disabled startup items (there's not much except for Flexnet license services). Restarted again.
Edited performance options (see pic 3). Removed transparencies, animations, etc.
Removed my desktop background image and set a solid color instead.
chkdsk c: /f
sfc /scannow (twice for good measure)
dism /online /cleanup-image /restorehealth (twice again for good measure)
Switched to power saver plan to check if GPU usage would decrease --yes it did... from 50% to 30%.
Downloaded DDU and booted to safe mode -- removed the NVIDIA drivers first (no restart after removal), then the Intel drivers (restart after removal)
Upon restarting, Windows redownloaded both INTEL and NVIDIA drivers from Windows Update and prompted for a restart
Mistyped the title... This is going to be a simple guide to help any R1 owner upgrade and optimize their Alpha.
(In order of importance)

Storage Unit: HDD out, SSD in. This is by far the easiest upgrade to make and the most effective. https://www.newegg.com/p/pl?N=100011693%20600038463 Any of those will work; it just needs to be a 2.5-inch SATA drive.

WIFI Card: This is like a $5-15 upgrade. Go find any Intel 7265NGW off eBay and replace your current WIFI card with it. If you don't want to buy used, then here.

RAM: RAM prices have tanked because of bitcoin mining, so this has become quite a cheap upgrade as well. I'd recommend 16GB just because why not, but if you're tight on cash, 8GB is fine. https://www.newegg.com/p/pl?N=100007609%20601190332%20601342186%20600000401&Order=BESTMATCH

CPU: This required the most research. I'd recommend you look through this first. The wattage of the processor slot only ranges from 35W to 50W according to a developer of the Alpha (source). The socket type is LGA 1150. If you're going cheap, the i5-4590T (35W) and i5-4690S (65W) are both great options. The i5-4690T (45W) is also great but is hard to find from a trustworthy source at a reasonable price. If you're willing to spend $100+, then the i7-4790T (45W) is easily the best processor to put in the Alpha. All 45W will be used, giving you a 3.9 GHz turbo. The T series apparently runs the best on the R1, according to this Reddit post.

GPU: Coming soon!

Maxed-out Alpha R1 specs: i7-4790T, 1TB Samsung SSD, 16GB DDR3, Nvidia GeForce GTX 860M. (Upgrading to anything better than that is pointless.)
Optimizing the Alpha R1
1st Completely wipe the computer
Just a good place to start, gets rid of Hivemind and other aging programs.
Anything in my current, fairly old (but water-cooled!), PC worth using in a new one?
I started building computers around the year 2000 and have never really done a complete build from scratch (for myself) after my first. I'd upgrade a part here and there, and over time everything has been replaced multiple times. However, due to an upgrade hiatus (it took me a LONG time to "beat" Skyrim :-P), I think I'm at the end of the road. I'm close to concluding that, for the second time in my life, it makes sense to do a fresh new build. I figured I'd run this past y'all first.

My next computer I'll use for both fun and work. On the fun side, it would ideally play modern games (particularly, I'm eyeing Elder Scrolls VI and Baldur's Gate III) on decent settings on my 34" widescreen monitor. Work-wise, it needs to be able to run multiple Docker containers and let me do other things (take notes in Notion, Google Docs, etc.) while on a CPU-crushing video call. The budget is $1,500. Here is my current setup and my thoughts on each component:

Photos: https://imgur.com/a/xyM07dx

Things that may be useful:

Operating System: Windows 10 Professional (from upgrading from Windows 7... the DVD is hopefully somewhere)

PSU: Corsair TX850W - It has been trusty for the last eight years, but may not have the needed connectors for today's stuff.

Hard Drive: Crucial MX100 512 GB SATA SSD, 2.5-inch - No performance complaints (specs claim 6.0 Gb/s), although I'm running out of storage space.

Optical Drive: Pioneer DVD-RW - Do people still put these in new computers? I also have an external USB DVD drive I could use in a pinch.

Case: Chieftec Dragon Mid Tower - This old case is steel and heavy as shit, which is actually nice as my dogs and toddlers are unlikely to knock it over inadvertently. It has a window, which I like, although cable management is a massive pain in the ass. I'm not too fond of the door that covers the buttons and optical drive and lost it long ago.

Cooling: Custom water cooling setup - I water-cooled in 2002, overclocking my Athlon XP 1700+ from 1.4 GHz to 2.5 GHz. It was awesome. The radiator and T-valve are the original gangsters. I'm on my fifth pump, with my last three being the Swiftech MCP655-B, which I like. The current water block is some D-Tek unit for the old CPU socket. The radiator is an old Chevy Impala radiator (I think) that a guy I met on a 3DMark (now Futuremark) forum (jb2cool?) custom modified, making a shroud that houses two 120mm fans. I had to drill the shit out of my case to mount this thing in there. I'm very nostalgic about this setup, but it would also be a huge pain to fit into a new case.

Monitor: LG 34UM67-P - 34" IPS widescreen; 5ms, 2560 x 1080, 60Hz; is 60Hz too slow these days?

Keyboard and mouse: Logitech Cordless Wave - USB dongle; wrists feel OK, no complaints.

Things that probably will not be useful:

Motherboard: Gigabyte P45T-ES3G - I'm pretty sure I won't be reusing this. I bought it to replace a more badass motherboard that died when my previous power supply died and took it out with it. I do like how it had dual BIOS, though.

CPU: Intel Core 2 Quad Q6600 - I've been impressed with this CPU lasting as long as it has. I wet-sanded it down to a mirror finish, ready to overclock the shit out of it, but then never got to it as life got in the way.

Memory: 4x4GB PC3-12800 DDR3 - G.Skill Ripjaws; ancient technology. Note: I want more than 16GB of RAM in my next build.

GPU: Asus GeForce GTX 460 - My previous GTX 460 died at the height of bitcoin, and any modern GPU was stupidly expensive. Replacing mine was only $30 on eBay, so that's the route I went.
tl;dr: are any of the above bolded components still worthwhile in a modern PC build?
Hi everyone, a quick intro here: I come from a professional horticulture background. I've been learning about computers, networking, network security, and Linux sysadmin for the last two years. I built a bunch of gaming computers for my kids and me with a bonus check I got in fall of 2017, right before the 2017 "bitcoin bubble". By luck I grabbed all my parts before the price of GPUs skyrocketed. All I've been doing though is learning about Linux and game development, learning digital art like 3D modeling, and streaming video games. I'm now learning to mine ZEC with tpruvot/ccminer 2.3.1 on Ubuntu 20.04 with the Nvidia proprietary driver version 440 and CUDA toolkit 10.1. I'm just learning how to do this and understand I'm not making a profit. It's more a learning experience and a hobby sort of thing for now. I don't really care if the system breaks; I have another computer with an AMD RX 560 that I work and game on Linux with. I can't mine with the Polaris GPU because I can't install OpenCL; there is no support for 20.04 from the Catalyst driver as of now. TL;DR I'm a noob and wondering why my hashrate is what it is. I am only using 1 GPU as of now (Nvidia 1050 Ti 4GB) and mining on a pool. I get an average of 140 Sol/s. Is this essentially the same as H/s, and is that a normal number for my card? Should I add a 2nd GPU I have if it's only a 1050 2GB? Also, I am using the nvtop & htop packages to monitor PC stats; they show 99% GPU usage and 100% of a single core of my CPU (Intel i5-6402P @ 3.2GHz), and fans and temps are good. But it shows I'm only using 0.6GB / 4GB of GPU memory while mining -- is that right? Shouldn't it be using more memory? Would it be overkill to mine with a CPU miner at the same time as the 2 cards? Sorry about the essay, and thanks for your time
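As a rough way to sanity-check numbers like that, here is a back-of-the-envelope pool-earnings sketch in Python. It is only the standard hashrate-share estimate; the network hashrate, block reward, and block time below are placeholder assumptions (not figures from the post) and would need to be swapped for current values.

# Rough expected-earnings estimate for a small Equihash miner.
# All network figures below are placeholder assumptions, not real data.
my_hashrate = 140.0          # Sol/s, from the post
network_hashrate = 5.0e9     # Sol/s, placeholder -- look up the current network value
block_reward = 10.0          # coins paid to miners per block, placeholder
block_time_s = 75            # seconds per block, placeholder

blocks_per_day = 86_400 / block_time_s
my_share = my_hashrate / network_hashrate
coins_per_day = my_share * blocks_per_day * block_reward

print(f"expected coins/day: {coins_per_day:.8f}")  # pool payouts average to roughly this, minus pool fees

A single 1050 Ti is always going to be a tiny fraction of the whole network, so the interesting part is plugging in real numbers and seeing just how small that fraction is.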
Hello everyone, I'm quite new here so I hope I get this right. I have a PC, mostly for gaming, maybe 3 years old. It cost me about 800€ then, and I considered it mid-tier back then. It has: Intel(R) Core(TM) i5-6400 CPU @ 2.70 GHz, 12 GB RAM, 64-bit Windows 10, Nvidia GeForce GT 730. So I actually have several questions, but I'm going to try and be compact. The most pressing is that I bought Red Dead Redemption 2 thinking that it would run at least kind of OK on low settings. It doesn't. Around 10 fps is the best I can get with everything on low, and it's obviously unplayable. Also, it seems that lowering settings doesn't really decrease the graphics quality, and the same goes for the ping: I get roughly the same ping on low settings or high, and things still look very shiny on low graphics. I really don't know too much about these topics, so I hope I don't get ridiculed too badly -- my friend told me I probably have a "bitcoin mining thing" that's draining my CPU/GPU. Is this possible/realistic? (Sorry if it's a dumb question) The big question, I guess, is how would I start upgrading this PC to make it more viable? Thanks to everyone in advance! Cheers
AMD Threadripper 3970x - My adventure so far (Will edit as I go)
I own a gaming community called Unknown Skies Gaming (for a little context) and we are planning a server upgrade. Currently we have a Dell R620 with dual E5-2690 v2s running @ 3.3GHz and 196GB of RAM. The game we host on this rig is called Empyrion Galactic Survival. The reason for the upgrade is that the server just isn't high-end enough to continue supporting our target playerbase of 100 players, and performance has dipped because the game is in alpha and the devs are not doing much to improve server-side efficiency. Back in 7.0 we could host 150 players on this machine; now it struggles with 60... Yeah, big change in 2 years.

After a lot of research... I can clearly see that dual Intel Xeon Gold 6154 CPUs do not outperform even the R9 3950X according to PassMark. The price of these used Intel servers is stunningly high on the used market... Dual 18-core Xeon Gold 6154 processors with 128GB of RAM (Dell R740 I think, feel free to correct me) cost right around $10,000 USD used, give or take $1,000 or so... and get stomped on by an $800 single-socket CPU that's new and has a warranty... I get that it doesn't support as much memory... but dang... So I looked into Threadripper... HOLY SHAT!! The 3970X hits 60k+ in PassMark... and the ENTIRE system with a custom cooling loop is less than 6 grand? That's nuts to me! Amazing deal! I'm sad all my servers and rigs are Intel hahahahaha.

What we need, what I'm aiming for, and my thought process -- feel free to leave thoughts and suggestions:

AMD Threadripper 3970X (because high per-thread throughput matters due to shit code, and more players on a playfield means more CPU usage on that thread; lots of threads matter too, as every playfield opens a new PlayfieldServer.exe and is its own instance, so the more spread out players are, the more cores you need to spread out the workload. 32 cores / 64 threads... NICE)

Corsair Vengeance LPX 256 GB (8 x 32 GB) DDR4-3200 memory (each player uses around 1-3GB of RAM, 2 on average; the goal was 100 players -- see the rough sizing check below)

ASRock TRX40 Taichi ATX sTRX4 motherboard (thought this was a solid selection given the options I saw)

EVGA 1000 W 80+ Gold certified semi-modular ATX power supply -- SHOULD be sufficient with a closed-loop water cooler, all SSDs, and no gaming GPU required (plan on a 1050 Ti / 1050 at most)

Case: a modified 4U rackmount case with 3x 120mm fans going in the front, radiator behind those. Could NOT find one that I didn't have to modify. Went with the el-cheapo option because I'm going to be drilling it and modding it internally -- a Rosewill 4U server/rackmount chassis (marketed for bitcoin mining) off eBay. I think it will work, and if not, I have a file server that would LOVE the home LMAO. So it was well worth the chance.

The water cooling solution I settled on was a Thermaltake Floe TR4 Edition. I thought about a custom cooling loop, but by the time I priced everything out the way I would do it... it came out to like $500-ish LMAO. The above cooler, from what I can tell, should be sufficient and should fit in the case I chose. Tight fit, but it should fit.

I'm excitedly looking forward to the build :) Anyone have any experience with any of this and care to chime in?
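A quick back-of-the-envelope check of that memory sizing, using the per-player figures from the post; the OS/overhead number is just an assumption.

# RAM sizing check for the build above (per-player numbers from the post, overhead is an assumption).
players_target = 100
ram_per_player_avg_gb = 2      # observed average per player
ram_per_player_max_gb = 3      # observed worst case per player
os_and_overhead_gb = 16        # assumption: OS, caches, other services

typical_gb = players_target * ram_per_player_avg_gb + os_and_overhead_gb
worst_case_gb = players_target * ram_per_player_max_gb + os_and_overhead_gb

print(f"typical load:    {typical_gb} GB")    # 216 GB -- fits in 256 GB
print(f"worst-case load: {worst_case_gb} GB") # 316 GB -- over 256 GB if every player hits 3 GB at once

So 256 GB covers the average case comfortably, but a full server where every player hits the 3 GB worst case would exceed it.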
OSRS currently uses a CPU renderer straight out of 2003
It's really REALLY bad! At least, by modern standards. It could not be more opposite to what modern computers pursue. It's not Jagex's fault, it's just old... Very VERY old! Replacing it is a huge undertaking, and Jagex has been too busy knocking mobile absolutely out of the park; I'd do the same if I were them - so don't think this is some kind of rag on Jagex.

Anyways, some may be surprised that this renderer is still managing to hurt computers today. How can software first written in 2003-2004 (FOR COMPUTERS OF THAT ERA) be laggy and stuttery on computers today? The answer is simple: resizable mode, and individual CPU core speed. Resizable mode takes a game window that used to be 765x503 (the majority of which used to be a fixed GUI canvas, but not with the new mode!) and renders it at resolutions as high as 3840x2160, maybe even higher. Do you know how many pixels that is? Over 8 million. Do you know how many pixels the original renderer was designed to expect? Just under 390,000. That's over 21x the work being thrown at modern CPUs. Cores aren't anywhere near 21x faster than they were at the close of the single-core era, which is why players with 4K monitors need to see therapists after long play sessions.

Surely CPUs have gotten faster since the mid 2000s! They have, but not quite in the way a single-threaded (single-core) CPU renderer would expect... CPU manufacturers have been focusing on power draw, temperatures, core count, and special architectural improvements like GPU integration and controller integration. Comparatively, improving individual core speed hasn't been as much of a focus as it was before the multi-core era - and no, I'm not talking about the useless gigahertz(TM) meme measurement, I'm talking about actual overall work done by the core. As a result, the CPUs we have today have developed down a much different path than the one this CPU renderer would benefit from: per-core speed has improved, but not nearly by the amount that resizable mode demands. Especially considering these cores were designed on the assumption that software wouldn't pile all of its work onto just one of them. We're throwing over 21x the work at CPUs whose per-core performance has, in most cases, only been improving 5-15% per year.
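A quick check of the arithmetic in that paragraph (nothing game-specific, just the pixel counts it quotes):

# Pixel counts quoted above.
fixed_mode_pixels = 765 * 503          # original fixed-size client canvas
uhd_pixels = 3840 * 2160               # a 4K resizable-mode window

print(fixed_mode_pixels)               # 384,795 -> "just under 390,000"
print(uhd_pixels)                      # 8,294,400 -> "over 8 million"
print(uhd_pixels / fixed_mode_pixels)  # ~21.6x the pixels, all on one CPU core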
What is a "frame"?
Think of a frame as a painting. Your GPU renderer (or CPU, cough cough) is responsible for using your GPU to paint an empty canvas and turn it into a beautiful, complete picture. First, it draws the skybox (if there is one; in the case of OSRS it just fills with black). Then, it draws all the visible geometry from back to front, with all the lighting and effects. Then, it draws the GUI elements over the top. It does everything, one pixel at a time. Its job is to draw these paintings as quickly as possible (ideally, so you perceive movement) and present them to your monitor, one at a time, forever... until you close the game. Think of a GPU renderer as a talented artist with hundreds of arms (GPU cores).

If your GPU is able to paint this picture in 16.6 milliseconds (frame time measurements are always in milliseconds), then you'll have a frame rate of 60 frames per second, as 1000 ms / 16.6 ms is about 60. Sometimes your renderer struggles, though. Sometimes it can only complete a frame in 100 milliseconds (10 FPS). You can't wave a magic wand when this happens. If you want a higher framerate, you need to either upgrade your hardware or change your software. By change software, I mean either make it more efficient at the work it's told to do, or give it less work. RuneLite has done the former. An example of the latter would be lowering resolution, turning graphical details down, turning off filtering, etc. Games usually call this set of controls the "graphics settings". Luckily, OSRS is so lightweight it will likely never need a graphics settings menu.

(Think of a CPU renderer as a painter with no artistic ability and, in the case of a quad core, four arms... but he's only allowed to paint with one, while the other 3 sit idle. Also, he has to constantly stop painting to return to his normal duties! No fun! The CPU is better off at its own desk, letting the GPU handle the painting.)
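The frame-time-to-FPS arithmetic from that paragraph, as a tiny sketch:

# Frame time (milliseconds) -> frame rate (frames per second).
def fps_from_frame_time(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

print(round(fps_from_frame_time(16.6)))   # ~60 FPS
print(round(fps_from_frame_time(100.0)))  # 10 FPS
print(round(fps_from_frame_time(4.5)))    # ~222 FPS (the 4-5 ms GPU figure quoted below)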
A GPU renderer improves frame rates
Not that this matters currently, as the game is capped at 50FPS anyways... but it's still going to be huge for low-end systems or high-end systems with high-res monitors. There's also the future, though... Once a GPU renderer is out, it may become possible to someday uncap the framerate (which, according to Mod Atlas, would only affect the character's camera, as all animations are 2FPS anyways). I expect that an update like this will make fixed mode a solid 50FPS on literally everything capable of executing the game. Fixed mode was already easy to run on everything except for old netbooks and Windows Vista desktops, so this really wouldn't be a surprise.
A GPU renderer improves frame times
Frame times are just as important as frame rates. Your frame rate is how many frames are drawn over the course of a second. But, as described previously, each "painting" is done individually. Sometimes the painter takes longer to do something! What if there's a glowing projectile flying past the camera, or something else momentary that's intensive? The painter has to take the time to paint that, resulting in a handful of frames over the course of that second taking much more time than the others. When your frame rate is high and frame times are consistent, this is perceived as incredibly smooth motion. Ideally, all of our frames are completed in the same amount of time, but this isn't the case. Sometimes "distractions" will come up, and cause the painter to devote an extra 10-20ms to it before returning to the rest of the painting. In bad scenarios, this actually becomes visible, and is referred to as micro stutter. Having a dedicated GPU renderer doing the work ensures this is very uncommon. A GPU has hundreds or thousands of cores. If some get distracted, others reach out and pick up the workload. Everything is smooth, distributed, and uninterrupted. You may recall Mod Atlas talking about frame times when he posted about his GPU renderer last year: https://twitter.com/JagexAtlas/status/868131325114552321 Notice the part where he says it takes 25+ms on the CPU, but only takes 4-5ms on the GPU! That's 200-250 frames per second, if the framerate were uncapped! Also, side note: Just because a frame is completed in 1ms doesn't always mean your framerate will be 1000FPS. If your framerate is capped, then the painter will sit and wait after completing and presenting a frame until it's time to start painting again. This is why capping your framerate can be good for power usage, as demonstrated on mobile! Your GPU can't suck up your battery if it's asleep 90% of the time!
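To make the micro-stutter point concrete, here is a small illustration with made-up frame times (not measurements from the game): the average frame rate can look fine while a couple of slow frames are what you actually notice.

# Hypothetical frame times (ms) for roughly one second of play:
# mostly smooth 16.6 ms frames, plus two "distracted" frames.
frame_times_ms = [16.6] * 56 + [40.0, 45.0]

avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
worst_ms = max(frame_times_ms)

print(f"average FPS: {avg_fps:.0f}")   # ~57 -- looks fine on paper
print(f"worst frame: {worst_ms} ms")   # 45.0 ms -- that spike is the visible micro stutter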
A GPU renderer is more efficient
Instead of piling all computational workloads and graphical workloads onto one single CPU core (rest in peace 8+ core users), a GPU renderer takes graphical work off the CPU and does it itself. I'd estimate the majority of all the work was graphical, so this will make a pretty noticeable difference in performance, especially on older systems. Before, having OSRS open while using other software would have a noticeable performance impact on everything. Especially on older computers. Not anymore! CPUs will run cooler, software will run better, and your computer may even use less power overall, since GPUs are much better at efficient graphical work than CPUs are!
All computers are already equipped to run this very VERY well
Most of the computers we have today are designed with two things: a good GPU, and an okay CPU. This isn't 2003 anymore. GPUs have made their way into everything, and they're prioritized over CPUs. They're not used just for games anymore, entire operating systems rely on them not just for animations and graphical effects, but entire computing tasks. GPUs are responsible for everything from facial recognition to Bitcoin mining these days. Not having a good one in your computer will leave you with a pretty frustrating experience - which is why every manufacturer makes sure you have one. Now, thanks to RuneLite, these will no longer be sitting idle while your poor CPU burns itself alive.
This new GPU renderer will make OSRS run much better on low end systems
Low end systems are notorious for having garbage like Intel Atom or Celeron in them. Their GPU is alright, but the CPU is absolutely terrible. Using the GPU will give them a boost from 5-15FPS in fixed mode, to around 50. At least, assuming they were made after the GPGPU revolution around 2010.
This new GPU renderer will make OSRS run much better on high end systems
High end systems tend to have huge GPUs and huge monitors. Right now, your GPU is asleep while your 4k monitor brings the current CPU renderer to its knees, on the verge of committing sudoku. Letting your GPU take on all that work will make your big and beautiful monitor handle OSRS without lag or stutter.
This new GPU renderer will open the possibility of plugins that build on top of it
One that comes to mind is a 2x/3x/4x GUI scaler. Scaling things in a graphics API is much easier than scaling them in some convoluted custom CPU renderer that was first designed to run in Internet Explorer 5.
It's easier to customize graphical variables in a GPU renderer than in a glitchy old CPU renderer
Want night time? Change the light intensity. Want cel-shaded comic book appearance for some stupid reason? It's easy. Want to hit 60FPS on a Raspberry Pi? Change your render distance to 2 tiles. Now that the graphical work has been offloaded to a graphics API that's been literally designed to easily modify these things, the sky is the limit. See my past posts on this topic:
Big round of applause for the RuneLite team, and Jagex for allowing them to continue development. Without RuneLite, OSRS would be half the game it is today. Here's to their continued success, with or without Jagex integrating their code into the main game!
My own (x-post) If you like virtually building PCs, will you help me with mine again?
Tl;dr: I have an i3, 8 GB of RAM, and a GTX 960 -- help me build a new PC so I can appropriately game again?

A few years back, I got some help building my current rig -- but I went TOO budget and need to upgrade. That build (which I am currently posting from) is here: https://pcpartpicker.com/useG1ng3rBr3dd/saved/#view=bGybt6

Specs:
- Intel i3-6100 CPU
- GeForce GTX 960 GPU
- G.Skill single-stick 8GB RAM
- Corsair CX 500 PSU
- Asus VX238H-W 23.0" 1920x1080 monitor (I think I need a higher refresh rate monitor, from what I've been told)
- Cooler Master N200 MicroATX Mini Tower case
- Gigabyte GA-H110M-A Micro ATX LGA1151 motherboard

Gaming has been increasingly difficult, and I want to actually enjoy gaming again without falling victim to being the lowest frame rate / highest latency on every server I enter. I gamed on my buddy's PC the other day and I was floored. I'm unsure what his specs are, but it made me realize that mine is just holding me back. I don't need a ProGaming+Streaming+Bitcoin Farming BEAST of a PC. I just want to enjoy gaming again and be able to for a few years with minimal upgrades. I really like PCPartPicker.com, as I'm ignorant and it's highly user-friendly, so if you are bored and like doing this stuff, I'd really appreciate the help picking the best parts for what I'm looking for at a price I can justify to myself. My top-tier budget is around $1,200, but that may be insanely high or insanely low for what I'm asking; I'm not really sure. If I can keep and use some of the parts I already have (tower case, motherboard [maybe], PSU [?]), that'd be awesome, but I understand if I can't. Thank you for reading, and thank you in advance if you decide to venture into this for me. I appreciate y'all.
I don't buy new laptops often, and when I do, I try to get the most out of my graphics. Before you AMD ass lickers ban me: I like AMD. If I was going to build a PC, it would at least have an AMD CPU and maybe a 5700 XT. I bought myself an Alienware, 32GB RAM, i7 6th gen and GTX 1070, for £600. Why I N T E L and N V I D I A? Easy answer: a full AMD machine is S H I T. I can still see AMD fanboys saying "OMG FX were still good". NO! You can call me whatever you want, but I like the better side. I liked Intel until this year, which is when AMD really took over. Anyways, I got sidetracked there. I only have one PC (a bitcoin miner) with a GTX 1080 and an I N T E L Core 2 Quad (mining is GPU but not CPU intensive), and AMD does a very bad job in mining. So I only use a laptop. I'm going to change my laptop in about 3 years, and it will be a 2-3 year old but still capable laptop. SO DOES THAT MEAN IT'S GOING TO BE AMD? Because you know, AMD is the best. Here I'm going to disappoint you. I have to be on the go and I can't have a PC. I never said I like AMD LAPTOP CPUs. In 3 years I'm getting a laptop with a 6-core CPU. Sadly, AMD doesn't offer you 6 cores. BUT WAIT, 7NM!!! Nope, 12nm. BUT...... BUT IT'S CHEAPER. I don't care, the laptop is going to be cheap anyway. All higher-end AMD mobile CPUs have 4 cores. Here I want to start a discussion and petition to have 6/8-core mobile Ryzen CPUs. And you know, since AMD likes pushing, make 12-core mobile CPUs.
Ok fantards, I'm sure your egos are way too fragile to actually break from the fanboy narrative here, but we are definitely going lower. "But, look at all the upgrades!" Yeah, and we're going lower; I figure $25.00 is a nice re-entry point. There are a lot of reasons for this, but look at the macro. This is the top for now, not just in AMD but also the general market. Everyone is on pins and needles waiting for another rate cut. Why? Because the economy is propped up to a ridiculous level. No rate cut? It crashes. Rate cut without promise of future cuts? It crashes. Fed injecting massive amounts of liquidity? Check. Manufacturing jobs disappearing? Check. All-time high after all-time high? Check. The global economy not doing so hot and we're ignoring it completely? Check. Smart money is exiting and securing their short positions as we speak. They're buoying the market just long enough to set themselves up for the retracement. I was around for 2008. Everything was wonderful, totally rock solid, until it wasn't. And I'll admit that we aren't looking at another 2008, but we are looking at a helluva correction. Ironically, AMD will be OK, but not until next year.

On that front, what do we have? Highest revenues since 2007. Yay. See the irony? Rollout is too slow, PE is too high (yes, it matters, especially when you're dealing with a manufacturer), there's too much hype around the CEO, and too many new shares are being dumped into the market. Yes, they are diluting; look at the numbers. I remember when they sought authorization for share issuance and all the usual fantards here started dumb-shaming anybody who dared question the notion that they might dilute. They were doing it "just so that they have the option of doing it", or some kind of nonsense like that was what they said with indignant sanctimony. Well, they have been diluting and still are. It's not that bad compared to what it could be, but it's still there and it affects the share price. Intel has been buying back shares and so has Nvidia. https://ycharts.com/companies/AMD/stock_buyback

But don't worry, help is on the way. I think they actually guided too conservatively for next quarter. I think that in the end AMD's superior tech will win decisive battles for market share. And, even though most of the "very smart" people on here throw a tantrum whenever I mention this: crypto will be resurgent and AMD will benefit directly from it. The Ethereum mining hardware pool is diverse; they don't want just ASICs, for a host of reasons. You can dismiss this out of hand at the behest of your own arrogance, but it's the truth. It will make a difference in the bottom line. You're looking at massive crypto gains in 2020. I'm not going to explain why because he does it a lot better: https://www.tradingview.com/filbfilb/ I picked up GBTC when Bitcoin was at 7500 and just sold it at 9400. I'm waiting for the 8250 range to re-enter. But when it blows up, it will take GPU-mineable alts with it. And if you think that these miners aren't already anticipating this and aren't accumulating cards right now to avoid profiteering when Bitcoin breaks to the upside, then you're delusional. Why do you think Radeons are "selling like hot cakes"? The tunnel vision here is amazing. Whatever the mainstream narrative is, you guys eat it up. Stop being such a bunch of fanboys.
Transcript of discussion between an ASIC designer and several proof-of-work designers from #monero-pow channel on Freenode this morning
[08:07:01] lukminer contains precompiled cn/r math sequences for some blocks: https://lukminer.org/2019/03/09/oh-kay-v4r-here-we-come/ [08:07:11] try that with RandomX :P [08:09:00] tevador: are you ready for some RandomX feedback? it looks like the CNv4 is slowly stabilizing, hashrate comes down... [08:09:07] how does it even make sense to precompile it? [08:09:14] mine 1% faster for 2 minutes? [08:09:35] naturally we think the entire asic-resistance strategy is doomed to fail :) but that's a high-level thing, who knows. people may think it's great. [08:09:49] about RandomX: looks like the cache size was chosen to make it GPU-hard [08:09:56] looking forward to more docs [08:11:38] after initial skimming, I would think it's possible to make a 10x asic for RandomX. But at least for us, we will only make an ASIC if there is not a total ASIC hostility there in the first place. That's better for the secret miners then. [08:13:12] What I propose is this: we are working on an Ethash ASIC right now, and once we have that working, we would invite tevador or whoever wants to come to HK/Shenzhen and we walk you guys through how we would make a RandomX ASIC. You can then process this input in any way you like. Something like that. [08:13:49] unless asics (or other accelerators) re-emerge on XMR faster than expected, it looks like there is a little bit of time before RandomX rollout [08:14:22] 10x in what measure? $/hash or watt/hash? [08:14:46] watt/hash [08:15:19] so you can make 10 times more efficient double precisio FPU? [08:16:02] like I said let's try to be productive. You are having me here, let's work together! [08:16:15] continue with RandomX, publish more docs. that's always helpful. [08:16:37] I'm trying to understand how it's possible at all. Why AMD/Intel are so inefficient at running FP calculations? [08:18:05] midipoet ([email protected]/web/irccloud.com/x-vszshqqxwybvtsjm) has joined #monero-pow [08:18:17] hardware development works the other way round. We start with 1) math then 2) optimization priority 3) hw/sw boundary 4) IP selection 5) physical implementation [08:22:32] This still doesn't explain at which point you get 10x [08:23:07] Weren't you the ones claiming "We can accelerate ProgPoW by a factor of 3x to 8x." ? I find it hard to believe too. [08:30:20] sure [08:30:26] so my idea: first we finish our current chip [08:30:35] from simulation to silicon :) [08:30:40] we love this stuff... we do it anyway [08:30:59] now we have a communication channel, and we don't call each other names immediately anymore: big progress! [08:31:06] you know, we russians have a saying "it was smooth on paper, but they forgot about ravines" [08:31:12] So I need a bit more details [08:31:16] ha ha. good! [08:31:31] that's why I want to avoid to just make claims [08:31:34] let's work [08:31:40] RandomX comes in Sep/Oct, right? [08:31:45] Maybe [08:32:20] We need to audit it first [08:32:31] ok [08:32:59] we don't make chips to prove sw devs that their assumptions about hardware are wrong. especially not if these guys then promptly hardfork and move to the next wrong assumption :) [08:33:10] from the outside, this only means that hw & sw are devaluing each other [08:33:24] neither of us should do this [08:33:47] we are making chips that can hopefully accelerate more crypto ops in the future [08:33:52] signing, verifying, proving, etc. [08:34:02] PoW is just a feature like others [08:34:18] sech1: is it easy for you to come to Hong Kong? (visa-wise) [08:34:20] or difficult? 
[08:34:33] or are you there sometimes? [08:34:41] It's kind of far away [08:35:13] we are looking forward to more RandomX docs. that's the first step. [08:35:31] I want to avoid that we have some meme "Linzhi says they can accelerate XYZ by factor x" .... "ha ha ha" [08:35:37] right? we don't want that :) [08:35:39] doc is almost finished [08:35:40] What docs do you need? It's described pretty good [08:35:41] so I better say nothing now [08:35:50] we focus on our Ethash chip [08:36:05] then based on that, we are happy to walk interested people through the design and what else it can do [08:36:22] that's a better approach from my view than making claims that are laughed away (rightfully so, because no silicon...) [08:36:37] ethash ASIC is basically a glorified memory controller [08:36:39] sech1: tevador said something more is coming (he just did it again) [08:37:03] yes, some parts of RandomX are not described well [08:37:10] like dataset access logic [08:37:37] RandomX looks like progpow for CPU [08:37:54] yes [08:38:03] it is designed to reflect CPU [08:38:34] so any ASIC for it = CPU in essence [08:39:04] of course there are still some things in regular CPU that can be thrown away for RandomX [08:40:20] uncore parts are not used, but those will use very little power [08:40:37] except for memory controller [08:41:09] I'm just surprised sometimes, ok? let me ask: have you designed or taped out an asic before? isn't it risky to make assumptions about things that are largely unknown? [08:41:23] I would worry [08:41:31] that I get something wrong... [08:41:44] but I also worry like crazy that CNv4 will blow up, where you guys seem to be relaxed [08:42:06] I didn't want to bring up anything RandomX because CNv4 is such a nailbiter... :) [08:42:15] how do you guys know you don't have asics in a week or two? [08:42:38] we don't have experience with ASIC design, but RandomX is simply designed to exactly fit CPU capabilities, which is the best you can do anyways [08:43:09] similar as ProgPoW did with GPUs [08:43:14] some people say they want to do asic-resistance only until the vast majority of coins has been issued [08:43:21] that's at least reasonable [08:43:43] yeah but progpow totally will not work as advertised :) [08:44:08] yeah, I've seen that comment about progpow a few times already [08:44:11] which is no surprise if you know it's just a random sales story to sell a few more GPUs [08:44:13] RandomX is not permanent, we are expecting to switch to ASIC friendly in a few years if possible [08:44:18] yes [08:44:21] that makes sense [08:44:40] linzhi-sonia: how so? will it break or will it be asic-able with decent performance gains? [08:44:41] are you happy with CNv4 so far? [08:45:10] ah, long story. progpow is a masterpiece of deception, let's not get into it here. [08:45:21] if you know chip marketing it makes more sense [08:45:24] linzhi-sonia: So far? lol! a bit early to tell, don't you think? [08:45:35] the diff is coming down [08:45:41] first few hours looked scary [08:45:43] I remain skeptical: I only see ASICs being reasonable if they are already as ubiquitous as smartphones [08:45:46] yes, so far so good [08:46:01] we knew the diff would not come down until after block 75 [08:46:10] yes [08:46:22] but first few hours it looks like only 5% hashrate left [08:46:27] looked [08:46:29] now it's better [08:46:51] the next worry is: when will "unexplainable" hashrate come back? [08:47:00] you hope 2-3 months? more? [08:47:05] so give it another couple of days.
will probably overshoot to the downside, and then rise a bit as miners get updated and return [08:47:22] 3 months minimum turnaround, yes [08:47:28] nah [08:47:36] don't underestimate asicmakers :) [08:47:54] you guys don't get #1 priority on chip fabs [08:47:56] 3 months = 90 days. do you know what is happening in those 90 days exactly? I'm pretty sure you don't. same thing as before. [08:48:13] we don't do any secret chips btw [08:48:21] 3 months assumes they had a complete design ready to go, and added the last minute change in 1 day [08:48:24] do you know who is behind the hashrate that is now bricked? [08:48:27] innosilicon? [08:48:34] hyc: no no, and no. :) [08:48:44] hyc: have you designed or taped out a chip before? [08:48:51] yes, many years ago [08:49:10] then you should know that 90 days is not a fixed number [08:49:35] sure, but like I said, other makers have greater demand [08:49:35] especially not if you can prepare, if you just have to modify something, or you have more programmability in the chip than some people assume [08:50:07] we are chipmakers, we would never dare to do what you guys are doing with CNv4 :) but maybe that just means you are cooler! [08:50:07] and yes, programmability makes some aspect of turnaround easier [08:50:10] all fine [08:50:10] I hope it works! [08:50:28] do you know who is behind the hashrate that is now bricked? [08:50:29] inno? [08:50:41] we suspect so, but have no evidence [08:50:44] maybe we can try to find them, but we cannot spend too much time on this [08:50:53] it's probably not so much of a secret [08:51:01] why should it be, right? [08:51:10] devs want this cat-and-mouse game? devs get it... [08:51:35] there was one leak saying it's innosilicon [08:51:36] so you think 3 months, ok [08:51:43] inno is cool [08:51:46] good team [08:51:49] IP design house [08:51:54] in Wuhan [08:52:06] they send their people to conferences with fake biz cards :) [08:52:19] pretending to be other companies? [08:52:26] sure [08:52:28] ha ha [08:52:39] so when we see them, we look at whatever card they carry and laugh :) [08:52:52] they are perfectly suited for secret mining games [08:52:59] they made at most $6 million in 2 months of mining, so I wonder if it was worth it [08:53:10] yeah. no way to know [08:53:15] but it's good that you calculate! [08:53:24] this is all about cost/benefit [08:53:25] then you also understand - imagine the value of XMR goes up 5x, 10x [08:53:34] that whole "asic resistance" thing will come down like a house of cards [08:53:41] I would imagine they sell immediately [08:53:53] the investor may fully understand the risk [08:53:57] the buyer [08:54:13] it's not healthy, but that's another discussion [08:54:23] so mid-June [08:54:27] let's see [08:54:49] I would be susprised if CNv4 ASICs show up at all [08:54:56] surprised* [08:54:56] why? [08:55:05] is only an economic question [08:55:12] yeah should be interesting. FPGAs will be near their limits as well [08:55:16] unless XMR goes up a lot [08:55:19] no, not *only*. it's also a technology question [08:55:44] you believe CNv4 is "asic resistant"? which feature? [08:55:53] it's not [08:55:59] cnv4 = Rabdomx ? [08:56:03] no [08:56:07] cnv4=cryptinight/r [08:56:11] ah [08:56:18] CNv4 is the one we have now, I think [08:56:21] since yesterday [08:56:30] it's plenty enough resistant for current XMR price [08:56:45] that may be, yes! [08:56:55] I look at daily payouts. XMR = ca. 
100k USD / day [08:57:03] it can hold until October, but it's not asic resistant [08:57:23] well, last 24h only 22,442 USD :) [08:57:32] I think 80 h/s per watt ASICs are possible for CNv4 [08:57:38] linzhi-sonia where do you produce your chips? TSMC? [08:57:44] I'm cruious how you would expect to build a randomX ASIC that outperforms ARM cores for efficiency, or Intel cores for raw speed [08:57:48] curious [08:58:01] yes, tsmc [08:58:21] Our team did the world's first bitcoin asic, Avalon [08:58:25] and upcoming 2nd gen Ryzens (64-core EPYC) will be a blast at RandomX [08:58:28] designed and manufactured [08:58:53] still being marketed? [08:59:03] linzhi-sonia: do you understand what xmr wants to achieve, community-wise? [08:59:14] Avalon? as part of Canaan Creative, yes I think so. [08:59:25] there's not much interesting oing on in SHA256 [08:59:29] Inge-: I would think so, but please speak [08:59:32] hyc: yes [09:00:28] linzhi-sonia: i am curious to hear your thoughts. I am fairly new to this space myself... [09:00:51] oh [09:00:56] we are grandpas, and grandmas [09:01:36] yet I have no problem understanding why ASICS are currently reviled. [09:01:48] xmr's main differentiators to, let's say btc, are anonymity and fungibility [09:01:58] I find the client terribly slow btw [09:02:21] and I think the asic-forking since last may is wrong, doesn't create value and doesn't help with the project objectives [09:02:25] which "the client" ? [09:02:52] Monero GUI client maybe [09:03:12] MacOS, yes [09:03:28] What exactly is slow? [09:03:30] linzhi-sonia: I run my own node, and use the CLI and Monerujo. Have not had issues. [09:03:49] staying in sync [09:03:49] linzhi-sonia: decentralization is also a key principle [09:03:56] one that Bitcoin has failed to maintain [09:04:39] hmm [09:05:00] looks fairly decentralized to me. decentralization is the result of 3 goals imo: resilient, trustless, permissionless [09:05:28] don't ask a hardware maker about physical decentralization. that's too ideological. we focus on logical decentralization. [09:06:11] physical decentralization is important. with bulk of bitnoin mining centered on Chinese hydroelectric dams [09:06:19] have you thought about including block data in the PoW? [09:06:41] yes, of course. [09:07:39] is that already in an algo? [09:08:10] hyc: about "centered on chinese hydro" - what is your source? the best paper I know is this: https://coinshares.co.uk/wp-content/uploads/2018/11/Mining-Whitepaper-Final.pdf [09:09:01] linzhi-sonia: do you mine on your ASICs before you sell them? [09:09:13] besides testing of course [09:09:45] that paper puts Chinese btc miners at 60% max [09:10:05] tevador: I think everybody learned that that is not healthy long-term! [09:10:16] because it gives the chipmaker a cost advantage over its own customers [09:10:33] and cost advantage leads to centralization (physical and logical) [09:10:51] you guys should know who finances progpow and why :) [09:11:05] but let's not get into this, ha ha. want to keep the channel civilized. right OhGodAGirl ? :) [09:11:34] tevador: so the answer is no! 100% and definitely no [09:11:54] that "self-mining" disease was one of the problems we have now with asics, and their bad reputation (rightfully so) [09:13:08] I plan to write a nice short 2-page paper or so on our chip design process. maybe it's interesting to some people here. 
[09:13:15] basically the 5 steps I mentioned before, from math to physical [09:13:32] linzhi-sonia: the paper you linked puts 48% of bitcoin mining in Sichuan. the total in China is much more than 60% [09:13:38] need to run it by a few people to fix bugs, will post it here when published [09:14:06] hyc: ok! I am just sharing the "best" document I know today. it definitely may be wrong and there may be a better one now. [09:14:18] hyc: if you see some reports, please share [09:14:51] hey I am really curious about this: where is a PoW algo that puts block data into the PoW? [09:15:02] the previous paper I read is from here http://hackingdistributed.com/2018/01/15/decentralization-bitcoin-ethereum/ [09:15:38] hyc: you said that already exists? (block data in PoW) [09:15:45] it would make verification harder [09:15:49] linzhi-sonia: https://the-eye.eu/public/Books/campdivision.com/PDF/Computers%20General/Privacy/bitcoin/meh/hashimoto.pdf [09:15:51] but for chips it would be interesting [09:15:52] we discussed the possibility about a year ago https://www.reddit.com/Monero/comments/8bshrx/what_we_need_to_know_about_proof_of_work_pow/ [09:16:05] oh good links! thanks! need to read... [09:16:06] I think that paper by dryja was original [09:17:53] since we have a nice flow - second question I'm very curious about: has anyone thought about in-protocol rewards for other functions? [09:18:55] we've discussed micropayments for wallets to use remote nodes [09:18:55] you know there is a lot of work in other coins about STARK provers, zero-knowledge, etc. many of those things very compute intense, or need to be outsourced to a service (zether). For chipmakers, in-protocol rewards create an economic incentive to accelerate those things. [09:19:50] whenever there is an in-protocol reward, you may get the power of ASICs doing something you actually want to happen [09:19:52] it would be nice if there was some economic reward for running a fullnode, but no one has come up with much more than that afaik [09:19:54] instead of fighting them off [09:20:29] you need to use asics, not fight them. that's an obvious thing to say for an asicmaker... [09:20:41] in-protocol rewards can be very powerful [09:20:50] like I said before - unless the ASICs are so useful they're embedded in every smartphone, I dont see them being a positive for decentralization [09:21:17] if they're a separate product, the average consumer is not going to buy them [09:21:20] now I was talking about speedup of verifying, signing, proving, etc. [09:21:23] they won't even know what they are [09:22:07] if anybody wants to talk about or design in-protocol rewards, please come talk to us [09:22:08] the average consumer also doesn't use general purpose hardware to secure blockchains either [09:22:14] not just for PoW, in fact *NOT* for PoW [09:22:32] it requires sw/hw co-design [09:23:10] we are in long-term discussions/collaboration over this with Ethereum, Bitcoin Cash. just talk right now. [09:23:16] this was recently published though suggesting more uptake though I guess https://btcmanager.com/college-students-are-the-second-biggest-miners-of-cryptocurrency/ [09:23:29] I find it pretty hard to believe their numbers [09:24:03] well [09:24:09] sorry, original article: https://www.pcmag.com/news/366952/college-kids-are-using-campus-electricity-to-mine-crypto [09:24:11] just talk, no? 
rumors [09:24:18] college students are already more educated than the average consumer [09:24:29] we are not seeing many such customers anymore [09:24:30] it's data from cisco monitoring network traffic