04:20:46
jeffro256:
Yes, you're right, these should be updated. I can write the tests so that they use the master-derived values
04:21:49
jeffro256:
@boog900:monero.social: Here is a more concrete sketch of how a PQ turnstile could work, including key image composition: https://gist.github.com/jeffro256/146bfd5306ea3a8a2a0ea4d660cd2243
08:36:01
monerobull:matrix.org:
does that one cuprate db striping thing really save 65% of disk space?
08:36:15
monerobull:matrix.org:
can't be for the blockchain data, right?
10:50:17
boog900:
@monerobull:matrix.org: Hmm, it's not saving 65%, it's saving 35% compared to our old DB. It's about 25% smaller than monerod. I'll ask hinto to update the post to make that clear.
10:50:57
monerobull:matrix.org:
ok, but is that for the entire data or just monerod itself?
10:50:58
boog900:
So a cuprate db is 195GB for a real number
10:51:42
monerobull:matrix.org:
compared to monerod's 260?
10:51:43
monerobull:matrix.org:
that's pretty good
10:51:53
boog900:
Yeah
10:51:59
monerobull:matrix.org:
wow, nice work
10:54:26
boog900:
> <@monerobull:matrix.org> can't be for the blockchain data, right?

There's no compression involved either; it's all in how the data is stored. LMDB uses a btree, we just append the data directly to a file.
10:54:55
boog900:
For most tables, for some we still use LMDB
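A minimal sketch of the append-only "tape" layout described above, assuming fixed-size records so that lookups are plain offset arithmetic instead of a B-tree walk. All names here are hypothetical illustrations, not Cuprate's actual code:

```rust
use std::fs::{File, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};

// Fixed-size records make the offset of record `i` simply `i * RECORD_SIZE`.
const RECORD_SIZE: u64 = 32;

struct Tape {
    file: File,
}

impl Tape {
    fn open(path: &str) -> std::io::Result<Self> {
        let file = OpenOptions::new()
            .read(true)
            .append(true)
            .create(true)
            .open(path)?;
        Ok(Self { file })
    }

    /// Append one record; its index is the current file length / RECORD_SIZE.
    fn push(&mut self, record: &[u8; 32]) -> std::io::Result<u64> {
        let index = self.file.metadata()?.len() / RECORD_SIZE;
        self.file.write_all(record)?;
        Ok(index)
    }

    /// Random access is one seek + read: no tree pages and no per-key
    /// overhead, which is where the space saving over a B-tree store comes from.
    fn get(&mut self, index: u64) -> std::io::Result<[u8; 32]> {
        let mut buf = [0u8; 32];
        self.file.seek(SeekFrom::Start(index * RECORD_SIZE))?;
        self.file.read_exact(&mut buf)?;
        Ok(buf)
    }
}
```

A layout like this only works for tables keyed by a dense integer index, which is presumably why some tables stay in LMDB.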
10:56:46
monerobull:matrix.org:
That should make it less likely to corrupt as well, right?
11:06:08
boog900:
Because we have 2 databases we no longer have fully atomic updates, as you can commit a tx on one and then crash before committing on the other. So in this sense it could be worse.
11:06:08
boog900:
We will have handling of this on startup to make sure the 2 DBs are in sync though.
11:06:08
boog900:
Both LMDB and the tapes support atomic updates, so as long as you have the right settings it should be unlikely your DB gets corrupted.
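A hedged sketch of that startup reconciliation: since each store commits independently, a crash between the two commits can leave them one block apart, so startup rolls the ahead store back to the common height. The trait and names are hypothetical, not Cuprate's recovery code:

```rust
// Sketch of a startup sync check between two independently-committed
// stores. If the commit order is fixed (say, tapes first, then LMDB),
// a crash leaves the committed heights at most one block apart; roll the
// ahead store back to the common height before resuming.
trait Store {
    fn committed_height(&self) -> u64;
    fn rollback_to(&mut self, height: u64);
}

fn reconcile(lmdb: &mut dyn Store, tapes: &mut dyn Store) {
    let target = lmdb.committed_height().min(tapes.committed_height());
    if lmdb.committed_height() > target {
        lmdb.rollback_to(target);
    }
    if tapes.committed_height() > target {
        tapes.rollback_to(target);
    }
    // From `target` onward both stores agree and normal sync can resume.
}
```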
11:33:13
monerobull:matrix.org:
neat
11:34:08
monerobull:matrix.org:
corruption is pretty rare, most reports come from people syncing on a raspberry pi (where it can take weeks to sync)
13:11:18
gingeropolous:
<< AI trigger warning >> i'll put this in the lounge because it's ai slop: https://github.com/Gingeropolous/blocksizejavasim/tree/main , https://gingeropolous.github.io/blocksizejavasim/ . AI port of @spackle:monero.social 's https://github.com/spackle-xmr/Dynamic_Block_Demo/blob/main/Dynamic_Blocksize_econ_draft.py . WARNING: haven't manually reviewed the code to see that it matches or makes sense.
14:03:45
spackle:
That's genuinely awesome to see, and way more accessible than the standalone python scripts. This could be a really helpful tool for getting people to understand the scaling. I would try looking things over right now, but today (and the next few days) is stuffed with plans.
14:04:53
spackle:
Thanks for trying this; I'll see about looking it over when I get the chance.
15:36:12
DataHoarder:
sech1: what would you call the operation now done under program_loop_store_hard_aes.inc, akin to existing ones (hashAes1Rx4 / fillAes1Rx4 / fillAes4Rx4 / hashAndFillAes1Rx4)?
15:37:23
DataHoarder:
mergeAesXXX?
15:37:38
sech1:
4Rx4 probably
15:39:49
DataHoarder:
good enough for now
16:04:39
hinto:
> <@monerobull:matrix.org> does that one cuprate db striping thing really save 65% of disk space?

It's ~65% of monerod's size; I updated the post to make that clearer.
16:31:05
gingeropolous:
well running 5 million blocks on that javascript is taking 10 minutes and counting...
16:31:31
ofrnxmr:xmr.mx:
Just run 2000 blocks with high fees and max flood
16:31:49
gingeropolous:
i wanna see the long term median adjust
16:32:07
ofrnxmr:xmr.mx:
So 200k blocks?
16:32:23
ofrnxmr:xmr.mx:
Will allow you to see at least some of it
16:32:51
DataHoarder:
sech1: Implemented V2 (already had commitments) + sample testcases for V2 as well https://git.gammaspectra.live/P2Pool/go-randomx/commit/6065a45778bf12784e060d5a69a97e00c217d172
16:32:52
ofrnxmr:xmr.mx:
Also, what are the fees used? Would be nice to have the fee tiers as options
16:33:06
DataHoarder:
I have checked these against the V2 RandomX PR and they all match :)
16:33:40
sech1:
nice
16:33:41
DataHoarder:
if you find it useful, tests.cpp https://privatebin.net/?c2f6614a8edab505#HMLzmc1y62rZhCSmU8vpQ5YQFzJyhhrAcGdoyfi42JFP
16:34:20
DataHoarder:
all using V2 (but ofc must test both so this is useful only for cross checking)
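A sketch of the cross-checking pattern described here: run identical (key, input) pairs through two implementations and pin the 32-byte outputs against shared test vectors. The hash closures are hypothetical stand-ins for calls into go-randomx and the C++ V2 PR, not real APIs:

```rust
// Cross-check harness sketch: identical (key, input) pairs go through two
// implementations; outputs must match each other and the shared vectors.
// `hash_impl_a` / `hash_impl_b` are hypothetical stand-ins, not real APIs.
fn hex(bytes: &[u8; 32]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn cross_check(
    vectors: &[(&[u8], &[u8], &str)], // (key, input, expected hex digest)
    hash_impl_a: impl Fn(&[u8], &[u8]) -> [u8; 32],
    hash_impl_b: impl Fn(&[u8], &[u8]) -> [u8; 32],
) {
    for &(key, input, expected) in vectors {
        let a = hash_impl_a(key, input);
        let b = hash_impl_b(key, input);
        assert_eq!(hex(&a), expected, "impl A diverges from the test vector");
        assert_eq!(a, b, "the two implementations disagree");
    }
}
```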
17:19:57
articmine:
@gingeropolous: 50000 blocks are needed for the long term median to change
17:20:20
articmine:
2000 is not enough
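The 50,000 figure follows from the mainnet parameter: the long-term median is taken over a window of the last 100,000 long-term block weights, so roughly half the window has to refill before the median itself moves. A toy model (ignoring the clamp mainnet applies to each long-term weight) shows the flip at exactly 50,000:

```rust
// Toy model of the long-term median: a 100,000-block window of weights at
// the 300,000-byte penalty-free zone, with the most recent `filled` blocks
// flooded to double weight. The real rules also clamp each long-term
// weight (600k would be cut to 1.7x the median), ignored here.
fn median(mut v: Vec<u64>) -> u64 {
    v.sort_unstable();
    v[v.len() / 2]
}

fn main() {
    let window = 100_000;
    let baseline = vec![300_000u64; window];
    for filled in [2_000usize, 49_999, 50_000, 60_000] {
        let mut w = baseline.clone();
        for slot in w.iter_mut().rev().take(filled) {
            *slot = 600_000;
        }
        println!("{filled} flooded blocks -> long-term median {}", median(w));
    }
    // Prints 300000 for 2,000 and 49,999 blocks; flips to 600000 at 50,000.
}
```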
18:52:45
gingeropolous:
well it's still running :(
21:15:20
Guest3:
Select large simulation mode to speed it up, with some loss of precision
21:18:15
Guest3:
Should not be so much less precise that you would notice a difference looking at graphs, but it does fudge things a bit.
21:19:18
Guest3:
But it will be much faster. Orders of magnitude faster for long simulations that build large blocks.
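One plausible reading of how such a mode gains speed (a guess, not the demo's actual code): recompute the rolling medians once per stride of N blocks and reuse the cached value in between, trading slightly stale medians for far less sorting on long runs:

```rust
// A guessed mechanism for a faster "large simulation" mode: recompute the
// rolling median only every `stride` blocks and reuse the cached value in
// between. Stale medians fudge the result a little but avoid re-sorting
// the window on every one of 500k+ blocks. `stride` must be >= 1.
fn median_of(window: &[u64]) -> u64 {
    let mut v = window.to_vec();
    v.sort_unstable();
    v[v.len() / 2]
}

fn simulate_medians(weights: &[u64], window: usize, stride: usize) -> Vec<u64> {
    let mut medians = Vec::with_capacity(weights.len());
    let mut cached = 0u64;
    for height in 0..weights.len() {
        if height % stride == 0 {
            let start = height.saturating_sub(window);
            cached = median_of(&weights[start..=height]);
        }
        medians.push(cached); // blocks in between reuse the stale median
    }
    medians
}
```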
21:45:28
Guest3:
5 million is a lot, especially flooding. Try 500 thousand
22:17:50
ofrnxmr:xmr.mx:
5 million is more blocks than exist...
22:18:16
ofrnxmr:xmr.mx:
263k per year
22:57:35
gingeropolous:
well, it was still running, so i stopped it and am now trying 500k
23:01:59
gingeropolous:
i don't think this is working right. if the short term median is 30e6 at block 100k, then the long term median can't be 537k at the same spot.
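For what it's worth, the numbers do look inconsistent under mainnet-style rules, where the effective short-term median is capped at 50x the long-term median. The simulator implements the draft rules, which may use different constants, so this is a plausibility check rather than a verdict on the port:

```rust
// Sanity check: under mainnet-style rules the effective short-term median
// is capped at 50x the long-term median, so the two reported values at
// block 100k cannot both be right under those rules.
fn main() {
    let long_term_median = 537_000u64;        // value reported at block 100k
    let reported_short_term = 30_000_000u64;  // value reported at block 100k
    let cap = 50 * long_term_median;          // = 26_850_000
    assert!(reported_short_term > cap);       // 30e6 exceeds the cap
    println!("short-term {reported_short_term} > 50x long-term cap {cap}");
}
```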