12:14:05
plowsof:
RandomXv2 official pull request by sech1 https://github.com/tevador/RandomX/pull/317 (shared in #monero-pow)
15:21:40
user5864:matrix.org:
plowsof: wait, does +52% more operations mean 52% more performance?
15:22:27
DataHoarder:
it's a different pow function
15:22:59
DataHoarder:
it's not directly equivalent, but yeah, it does way more work (when measured in instructions and memory accesses) and at the same time gets a higher hashrate on modern CPUs compared to v1
15:23:26
DataHoarder:
look at https://github.com/SChernykh/RandomX/blob/v2/doc/design_v2.md#4-performance-impact benchmarks
15:23:37
DataHoarder:
look at VM+AES/s, that is absolute work
15:23:47
user5864:matrix.org:
yeah I stumbled on that, thanks
15:23:50
user5864:matrix.org:
woah
15:23:58
DataHoarder:
or relative work/joule to compare
15:24:28
DataHoarder:
so while older gens might end up with a lower H/s, they also do more work than on v1
15:25:43
user5864:matrix.org:
how does outputting more instructions result in a lower hashrate?
15:27:35
user5864:matrix.org:
this looks very good for Zen5, but all the others seem to get a worse outlook
15:29:48
DataHoarder:
cause v2 uses a program size of 384 instructions instead of 256
15:29:50
DataHoarder:
for example
15:30:14
sech1:
v2 hash is 1.5x more work than v1 hash
15:30:20
sech1:
so don't just look at the fact that the v2 hashrate is lower
15:30:21
DataHoarder:
yeah, modern ones benefit more (we push their memory further)
15:30:32
sech1:
multiply it by 1.5 first if you really want to compare hashrates
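A quick numeric sketch of sech1's point. The 1.5x work factor comes from the chat above; the 10 kH/s and 8 kH/s figures are made up purely for illustration:

```python
# Work ratio stated in the chat: one v2 hash is 1.5x the work of one v1 hash.
V2_WORK_FACTOR = 1.5

def v1_equivalent_rate(v2_hashrate: float) -> float:
    """Scale a v2 hashrate into v1-equivalent work per second."""
    return v2_hashrate * V2_WORK_FACTOR

# Hypothetical CPU: 10 kH/s on v1, dropping to 8 kH/s on v2.
v1_rate = 10_000.0
v2_rate = 8_000.0

# 8 kH/s of v2 is 12 kH/s of v1-equivalent work: lower hashrate, more work.
print(v1_equivalent_rate(v2_rate))
```

So a raw v1-vs-v2 hashrate comparison is misleading until the v2 number is scaled by the work factor.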
15:30:37
user5864:matrix.org:
hmm
15:30:44
user5864:matrix.org:
so I guess it's not about hashrate, but about higher share results?
15:30:49
DataHoarder:
^ they are different PoW functions, you can't compare them to each other directly. compare VM+AES/s or relative work/joule
15:30:51
DataHoarder:
no
15:30:58
user5864:matrix.org:
hm
15:31:22
DataHoarder:
it's about work/joule (does more work per joule, aka more efficient)
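The work/joule metric DataHoarder mentions can be sketched like this. The 1.5x work factor is from the chat; the hashrates and the 100 W package power are hypothetical numbers chosen only to show how the comparison works:

```python
def work_per_joule(hashes_per_sec: float, work_per_hash: float, watts: float) -> float:
    """Work per joule = (hashes/s * work/hash) / (joules/s)."""
    return hashes_per_sec * work_per_hash / watts

# Same hypothetical CPU at the same 100 W, with v1 work as the unit of work.
v1 = work_per_joule(10_000, 1.0, 100)  # baseline: v1, 1.0 work per hash
v2 = work_per_joule(8_000, 1.5, 100)   # v2: lower H/s, 1.5x work per hash

print(v1, v2)  # under these assumptions, v2 is more efficient per joule
```

The point is that efficiency is measured in work per joule, not hashes per second, so a lower v2 hashrate can still mean a more efficient miner.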
15:32:50
user5864:matrix.org:
it's just so counterintuitive. Lower hashrate, but it outputs more operations
15:33:04
sech1:
Well, mine Bitcoin if you want more hashrate
15:33:05
DataHoarder:
cause you are comparing say
15:33:19
DataHoarder:
bitcoin sha hashrate to randomx hashrate
15:33:21
DataHoarder:
they do different things
15:33:23
DataHoarder:
same on v1 to v2
15:33:27
DataHoarder:
they do "different" things
15:33:30
user5864:matrix.org:
just trying to wrap my brain around it, I'm non-technical, sorry
15:33:43
sech1:
It's like dollar and euro
15:33:49
sech1:
Similar things, but not 1:1 rate
15:34:02
DataHoarder:
imagine this: a "randomx 2x" that is two randomx hashes per hash
15:34:03
sech1:
v1 hashes and v2 hashes
15:34:05
DataHoarder:
you have half hashrate
15:34:08
DataHoarder:
but 2x work
15:34:13
user5864:matrix.org:
ahh > <DataHoarder> they do different things
15:34:28
DataHoarder:
this is why you can't compare hashrate directly
15:34:54
user5864:matrix.org:
I see. Thanks for the explanation :))
15:34:58
DataHoarder:
in this case it's like randomx 1.5x :') but with more specific changes that CPUs do well
15:35:07
user5864:matrix.org:
you think it might go live in the next couple of weeks?
15:36:02
DataHoarder:
it needs a hardfork
15:36:25
user5864:matrix.org:
ohh. So I guess it might come with the fcmp one
15:36:43
user5864:matrix.org:
or around
15:37:35
user5864:matrix.org:
exciting times
15:40:11
user5864:matrix.org:
appreciate the hard work put behind all of this
15:54:45
nioc:
so modern CPUs are better, but I can't build a new rig cause RAM has 4x'd in price :(
16:01:31
sech1:
If you undervolt and optimize for efficiency, a single RAM stick will do. 1 stick can handle up to 20-21 kh/s
16:03:52
DataHoarder:
you can use laptop memory :')
16:12:44
nioc:
hmmm
16:29:18
user5864:matrix.org:
nioc: good news is, DDR5 didn't affect the hashrate that much, on V1 at least
16:29:32
user5864:matrix.org:
so now 1 kit of RAM can become 2 rigs
16:29:57
user5864:matrix.org:
should be an interesting thing to test out too, how RAM affects hashrate on Zen5 with V2
16:46:11
DataHoarder:
Zen5 is still capped on bandwidth
16:46:37
user5864:matrix.org:
so it shouldn't change?
16:47:00
DataHoarder:
I have slow ram on a zen5 platform and it was also equivalent. The prefetch trick makes latency tuning less critical afaik (but you still benefit)
16:47:26
DataHoarder:
In my own implementation, it was like +2 KH/s over V2 without the memory prefetch
16:47:55
DataHoarder:
Without that there were no large differences across generations
17:07:45
user5864:matrix.org:
thanks for the insight
18:27:07
n1oc:
[CCS Proposals] Lee Clagett opened merge request #637: Mark 2025 Q4 Month 1 Completed https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/637