03:59:32
reject-all:matrix.org:
Regarding this BS stuff: is Monero moving towards nodes only in data centers and on high-end infrastructure?
04:06:22
elongated:matrix.org:
@reject-all:matrix.org: If it’s heavily spammed yes
04:13:55
reject-all:matrix.org:
If users are unable to run a node (without access to data centers, high-end equipment) does this threaten the decentralization (and therefore the censorship resistance) of Monero?
04:27:32
ofrnxmr:xmr.mx:
@reject-all:matrix.org: Obviously
04:27:51
ofrnxmr:xmr.mx:
Or (less rude) by definition, yes.
04:49:14
reject-all:matrix.org:
So you would say it's a requirement that the 'common' user, with consumer hardware and typical bandwidth limitations, be capable of running a full node, and that otherwise Monero won't be decentralized/permissionless?
04:49:14
reject-all:matrix.org:
Is this sufficiently being taken into account with regards to BS/scaling?
04:51:50
reject-all:matrix.org:
@ofrnxmr:xmr.mx
04:53:56
ofrnxmr:xmr.mx:
i wouldn't say a requirement, but a preference
04:55:45
ofrnxmr:xmr.mx:
Currently monero is running quite well on stressnet with 10+ MB blocks, including a couple of old quad-core HDD systems and a single-core 2 GB RAM VM
04:56:46
ofrnxmr:xmr.mx:
As far as bandwidth, no. I don't think monero aims to support limited bandwidth, though we are working to reduce bandwidth by upwards of 70% from current
05:04:40
ircmouse:matrix.org:
New Monero research paper just dropped! "Inside Qubic’s Selfish Mining Campaign on Monero: Evidence, Tactics, and Limits"
05:04:40
ircmouse:matrix.org:
Link: https://arxiv.org/pdf/2512.01437
05:04:40
ircmouse:matrix.org:
Credit to @ack-j:matrix.org for pointing it out in the MRL channel. Didn't see it posted here so wanted to share.
05:16:40
datahoarder:
^ I commented about this paper in the MRL channel. TL;DR: they had limited, less granular data, and estimated numbers similar to what we measured empirically from granular data
05:17:34
elongated:matrix.org:
@ofrnxmr:xmr.mx: How much storage do these “old” systems need to have? To be future proof for 3-4 yrs
05:17:56
reject-all:matrix.org:
@ofrnxmr:xmr.mx: Interesting, I'll try and get setup with stressnet on my PC.
05:17:56
reject-all:matrix.org:
But I do find something a bit unclear:
05:17:56
reject-all:matrix.org:
Users able to run full nodes without data centers/high-end equipment is by definition decentralization/censorship resistance.[... more lines follow, see https://mrelay.p2pool.observer/e/5fvFs88KeDRoR29V ]
05:22:06
ofrnxmr:
Depends what you define as "common user" and "consumer hardware" and "typical bandwidth"
05:23:40
datahoarder:
@reject-all:matrix.org: there's a future with aggregated proofs that would allow a mixed version of pruned/full/archival. One where it downloads pruned txs, but each block has an aggregated proof that fully verifies the transactions. Archival nodes would keep the per-transaction full proofs, but they aren't needed for these lighter full verification nodes.
05:23:53
datahoarder:
Storage and bandwidth requirements for these would be vastly lower
05:24:21
ofrnxmr:
tevador's proposal seems to intend to keep up with "consumer hardware" advancements
05:25:22
ofrnxmr:
For full/archival nodes
05:33:38
ofrnxmr:xmr.mx:
@elongated:matrix.org: future proof with 10mb blocks = 3tb for year 1 :)
05:34:19
elongated:matrix.org:
@ofrnxmr:xmr.mx: Isn’t the consensus limit 100mb?
05:35:33
elongated:matrix.org:
Just assuming some agency has its life mission to spam xmr 😅
05:35:51
elongated:matrix.org:
30tb/yr ? With 100mb limit
05:36:14
DataHoarder:
making it highly centralized due to storage/compute/bandwidth costs -> then strike the central locations :)
05:36:49
ofrnxmr:xmr.mx:
@elongated:matrix.org: 100mb is the packet size limit, won't be hit for 6 yrs under tevador's or articmine's proposals
05:37:45
ofrnxmr:xmr.mx:
Yeah, ppl yelling fud about the 90mb limit don't realize that 90mb is 65gb per day
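The storage figures thrown around in the last few messages (3 TB/yr at 10 MB, 30 TB/yr at 100 MB, 65 GB/day at 90 MB) can be sanity-checked with a quick sketch, assuming Monero's ~2-minute target block time (720 blocks/day) and counting raw block data only, no database overhead:

```python
# Back-of-envelope storage math for the block sizes discussed above,
# assuming a ~2-minute block time (720 blocks/day), raw block data only.
BLOCKS_PER_DAY = 24 * 60 // 2  # 720

def daily_gb(block_mb: float) -> float:
    return block_mb * BLOCKS_PER_DAY / 1000  # GB per day

def yearly_tb(block_mb: float) -> float:
    return daily_gb(block_mb) * 365 / 1000   # TB per year

print(daily_gb(90))    # 64.8 GB/day  -> the "65gb per day" figure
print(yearly_tb(10))   # ~2.6 TB/year -> "3tb for year 1" with overhead
print(yearly_tb(100))  # ~26 TB/year  -> "30tb/yr" rounded up
```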
05:37:51
DataHoarder:
in chain data* the limit can strike due to other factors
05:38:26
elongated:matrix.org:
@ofrnxmr:xmr.mx: Thx to artic fans
05:39:26
ofrnxmr:xmr.mx:
i don't even think artic fans, just ppl who are claiming that i'm "breaking a promise" "for no reason"
05:39:55
ofrnxmr:xmr.mx:
Pointing to getmonero's retarded faq as proof that monero blocks are currently "unlimited"
05:40:40
DataHoarder:
People don't realize we use size_t and not arbitrary precision integers for packet and block sizes
05:40:56
DataHoarder:
can't even go past uint64_t block sizes!
05:41:08
ofrnxmr:xmr.mx:
https://www.getmonero.org/get-started/faq/#anchor-block-limit
05:41:08
ofrnxmr:xmr.mx:
> No, Monero does not have a hard block size limit. Instead, the block size can increase or decrease over time based on demand. It is capped at a certain growth rate to prevent outrageous growth (scalability).
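For context on the FAQ's "capped at a certain growth rate": a minimal sketch of the classic CryptoNote penalty mechanism, which is what makes the block size dynamic rather than unlimited. The real rules use short-term and long-term medians and more parameters; this shows only the core quadratic penalty and the 2x-median hard bound:

```python
# Simplified sketch of the CryptoNote-style block size penalty.
# A miner may exceed the median of recent block sizes only by giving up part
# of the block reward; above 2x the median the block is invalid outright.
def reward_after_penalty(base_reward: float, block_size: int, median: int) -> float:
    if block_size <= median:
        return base_reward          # no penalty at or below the median
    if block_size > 2 * median:
        raise ValueError("invalid block: more than 2x median size")
    excess = block_size / median - 1
    return base_reward * (1 - excess ** 2)  # quadratic penalty on the excess

print(reward_after_penalty(0.6, 300_000, 300_000))  # full reward at the median
print(reward_after_penalty(0.6, 450_000, 300_000))  # 1.5x median: 25% penalty
```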
05:41:53
elongated:matrix.org:
@ofrnxmr:xmr.mx: Needs to be fixed
05:42:00
DataHoarder:
technically there is a cap even if we send packets less badly
05:42:11
DataHoarder:
if block header itself reaches packet limit :)
05:42:28
DataHoarder:
100 MiB block headers would be ... interesting
05:42:51
DataHoarder:
just about 3 million tx hashes
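The "3 million" figure checks out if you divide the 100 MiB limit by the 32 bytes of each tx hash (ignoring the few dozen bytes of actual header fields):

```python
# Tx hashes that fit in a 100 MiB levin packet, 32 bytes per hash.
TX_HASH_BYTES = 32
LEVIN_PACKET_LIMIT = 100 * 1024 * 1024
print(LEVIN_PACKET_LIMIT // TX_HASH_BYTES)  # 3276800, "just about 3 million"
```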
05:45:07
ofrnxmr:xmr.mx:
WHY ARE YOU BREAKING MONERO'S PROMISE! DATAHOARDER IS A FED WHO IS TRYING TO HIJACK MONERO
05:45:25
DataHoarder:
1 exabyte (2^63) block size :)
05:46:25
ofrnxmr:xmr.mx:
has anyone looked to see if zano has the packet limit?
05:46:41
DataHoarder:
damn, you can no longer address the storage of the world in a single uint64
05:47:55
rbrunner7:
After reading this, and the Twitter thread it links to, I fear we could be near a total breakdown of any sensible discussion about block sizes. Out go technical arguments and sound logical reasoning, and emotions totally rule the day: https://old.reddit.com/r/Monero/comments/1pem7eq/soon_bch_will_be_the_only_contender_for_sound/
05:49:08
rbrunner7:
(There is currently a very detailed response to my comment from ArticMine caught in some filter, waiting for release.)
05:50:23
DataHoarder:
> This is all really strange considering that the current average block size is 100 kB and it's not possible to up it up to anything really big in a fast manner.
05:50:29
DataHoarder:
stressnet disagrees
05:51:20
rbrunner7:
Say, how much would it cost to produce a valid 50 MB block, mine it, and bring the network down with it? Can't be more than a few thousand dollars, I would guess? If I was rich I would be tempted to do that as an attempt to bring people to their senses.
05:51:24
DataHoarder:
yeah, the temporary part (it doesn't even have to make it to the hardfork if fixed before) has been totally lost to the wind
05:51:40
DataHoarder:
if it's a miner, rbrunner7, effectively "free"
05:51:45
DataHoarder:
even better if they do 51%
05:52:01
DataHoarder:
they can pad their own blocks with txs to grow the median for free
05:52:04
rbrunner7:
Ah, yes, of course, because you get your expenses back :)
05:53:14
DataHoarder:
without majority hashrate you still need to spam, but you can get some better efficiency if you are already a mining pool
05:53:24
DataHoarder:
pad what you can and spam the rest
05:54:34
rbrunner7:
Maybe we can win over M5M400?
05:54:47
elongated:matrix.org:
@ofrnxmr:xmr.mx: They are safe with 0.01 zano tx fees
05:55:16
DataHoarder:
funnily qubic was padding their blocks with withheld txs
05:55:20
ofrnxmr:xmr.mx:
zano = 100mb (same code as monero) https://github.com/hyle-team/zano/blob/master/contrib%2Fepee%2Finclude%2Fnet%2Flevin_base.h#L90
05:55:20
ofrnxmr:xmr.mx:
And 50mb p2p https://github.com/hyle-team/zano/blob/master/src%2Fcurrency_core%2Fcurrency_config.h#L141
05:55:27
DataHoarder:
... but the max number of txs they could mine was 20.
05:55:35
rbrunner7:
No, seriously, I think there are people right now that can only return to their senses quickly by hitting them on the head with a hammer.
05:55:43
DataHoarder:
so literally qubic had set a hardcoded limit into how many transactions could be included
05:56:02
DataHoarder:
zano 50mb limit!!!!
05:56:25
DataHoarder:
actually, we also do have 50mb packet size
05:56:30
DataHoarder:
and 100mb for levin
05:56:33
rbrunner7:
Yeah, but anyway not a contender for "sound money", so ...
05:56:47
DataHoarder:
MAX_RPC_CONTENT_LENGTH = 1048576 // 1 MB
05:56:53
DataHoarder:
DEFAULT_RPC_SOFT_LIMIT_SIZE 25 * 1024 * 1024 // 25 MiB
05:57:17
ofrnxmr:xmr.mx:
https://github.com/monero-project/monero/blob/master/src%2Fcryptonote_config.h#L141
05:57:17
ofrnxmr:xmr.mx:
monero has that same 50mb line
05:57:30
DataHoarder:
so maybe we are in a worse place than we thought :)
05:57:58
rbrunner7:
Don't think you can get away with "unlimited logical block size, with a limit of 100 MB for individual block parts". See the word "limit" in there? That will be enough for people to freak out :)
05:58:12
DataHoarder:
block parts = txs
05:58:23
DataHoarder:
which is already bounded
05:58:24
ofrnxmr:xmr.mx:
Rbrunner, you didn't read getmonero.org? Blocks are unlimited
05:58:41
DataHoarder:
#define CRYPTONOTE_MAX_TX_SIZE 1000000
05:58:43
DataHoarder:
oh also
05:58:45
DataHoarder:
#define CRYPTONOTE_MAX_TX_PER_BLOCK 0x10000000
05:58:48
DataHoarder:
^ also size limit
05:59:09
ofrnxmr:xmr.mx:
oh thats racist
05:59:14
rbrunner7:
I am stealth "small blocker", what do you expect.
05:59:17
DataHoarder:
that is 2^28
05:59:23
ofrnxmr:xmr.mx:
you have to change that to infinity
06:00:12
DataHoarder:
1000000 * 2^28 bytes to TiB = 244 TiB blocks
06:00:22
DataHoarder:
yet another limit
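That 244 TiB figure is just the product of the two constants quoted above (a sanity check, not how monerod computes anything):

```python
# Implicit block size bound from the two constants quoted above
# (values as they appear in monero's cryptonote_config.h).
CRYPTONOTE_MAX_TX_SIZE = 1_000_000        # max bytes per transaction
CRYPTONOTE_MAX_TX_PER_BLOCK = 0x10000000  # 2^28 transactions per block

max_block_bytes = CRYPTONOTE_MAX_TX_SIZE * CRYPTONOTE_MAX_TX_PER_BLOCK
print(round(max_block_bytes / 2**40, 1), "TiB")  # 244.1 TiB blocks
```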
06:01:35
rbrunner7:
Well, maybe we could live with that limit if we drop blocktime down to 1 second.
06:01:54
rbrunner7:
More transactions that way.
06:02:42
DataHoarder:
bring it down enough that speed of light and distance start making 10+ blocks orphanable, so all miners need to coexist in the same server rack
06:28:24
kayabanerve:matrix.org:
If we had asynchronous consensus, blocks could be produced per throughput, not an arbitrary time interval.
06:39:12
rbrunner7:
Note to self: If people around me throw reason and logic overboard and act almost purely on emotion, it doesn't help if I do likewise as my reaction to this happening.
07:45:29
sech1:
With the current monerod limitations, miners will start limiting block sizes way before 100 MB
07:45:37
sech1:
I mean performance limitations
07:45:45
sech1:
Qubic even mined empty blocks for a while to be "more efficient"
07:50:15
sech1:
P2Pool has a packet size limit of 128 KB which limits it to max 4k transactions per block
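The "max 4k transactions" figure follows directly from dividing the packet limit by the 32-byte tx hash size (a sketch, assuming each tx in a block template is referenced by its hash):

```python
# P2Pool's 128 KB packet limit translated into a per-block tx count.
PACKET_LIMIT = 128 * 1024  # 128 KB
TX_HASH_BYTES = 32
print(PACKET_LIMIT // TX_HASH_BYTES)  # 4096 -> "max 4k transactions per block"
```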
12:10:43
gingeropolous:
we need to build a fab
16:30:49
sgp_:
lazy developers need to do their job https://www.reddit.com/r/Monero/comments/1peug7m/monero_developers_are_on_track_to_add_an/
16:31:57
sgp_:
shame on you all for prioritizing fcmp++. We all know that will lead to way worse privacy than simply allowing big blocks. shame!
16:38:18
rbrunner7:
A sad day for Monero. I can hear Monero's enemies rejoice.
16:40:01
sgp_:
This scaling death cult was a sleeping issue all along unfortunately. These network vulnerabilities finally being challenged is a step in the right direction
16:41:57
rbrunner7:
I would like to see LMDB manage a multi-terabyte blockchain file. Would be an interesting exercise.
16:48:47
syntheticbird:
@rbrunner7: LMDB2: electric boogaloo when
16:53:47
boog900:
I am happy to see some push back on reddit, starting a propaganda war is stupid.
16:55:18
ofrnxmr:xmr.mx:
so.. wen serialization limit fixes? 7999 8867 9433
16:55:36
boog900:
cuprate has already done it :p
16:55:40
ofrnxmr:xmr.mx:
Since those limit blocks to ~30mb
16:56:21
ofrnxmr:xmr.mx:
@boog900: Right, but we're fussing about the 100mb limit from genesis when we have a 30mb limit added in 2020
16:56:37
ofrnxmr:xmr.mx:
That's been fixed since like 2021
16:56:48
boog900:
I am really surprised it has taken so long for 9433
16:57:30
boog900:
like the others I kinda get taking a while to review and whatever, but that should be a simple change.
16:57:53
ofrnxmr:xmr.mx:
Considering 9433 is just a stop-gap/bandaid, i'm also surprised that it hasn't yet been reviewed/merged
16:58:23
DataHoarder:
untested, removed the never used txin/txout values https://irc.gammaspectra.live/b11e6d8f7bdf2162/0001-remove-deserialization-and-serialization-code-for-tx.patch
16:58:42
DataHoarder:
mainnet node just works :')
16:58:53
DataHoarder:
> 6 files changed, 5 insertions(+), 292 deletions(-)
17:07:43
ofrnxmr:xmr.mx:
Has anyone compared 7999 and 8867 to see which actually performs better?
17:09:55
boog900:
Proposed some different scaling parameters: https://github.com/seraphis-migration/monero/issues/44#issuecomment-3617687600
17:10:05
ofrnxmr:xmr.mx:
8867 has, aiui, started to be merged in pieces, but 7999 is the smaller pr and (again, aiui) has demonstrated much improved performance
17:40:17
gingeropolous:
so these things could address the 90MB block limit. And have been sitting in PR limbo since 2021
17:41:50
gingeropolous:
so it'll soon be 5 years that these fixes have sat there.
18:12:26
ofrnxmr:xmr.mx:
@gingeropolous: The 30mb limit
18:12:40
ofrnxmr:xmr.mx:
The 90/100mb limit is unaddressed
18:13:03
ofrnxmr:xmr.mx:
There's also a 50mb p2p packet limit, also inherited
18:14:01
boog900:
@ofrnxmr:xmr.mx: I am working on a proposal to change how blocks are synced. Hopefully fixing this and adding a couple nice features.
18:14:21
boog900:
@ofrnxmr:xmr.mx: Also I checked and I can't see where this is enforced.
18:14:24
ofrnxmr:xmr.mx:
Fluffy blocks during ibd w/ split messages
18:14:48
ofrnxmr:xmr.mx:
@ofrnxmr:xmr.mx: ?*
18:17:12
boog900:
I mean we could reuse the messages but that wouldn't be ideal IMO. But if you just mean the gist of fluffy blocks then yes.
18:20:43
ofrnxmr:xmr.mx:
download all block headers first, then add the txs 🧠
18:31:21
boog900:
If you are taking the mick, then I don't see why. I want to add more to it than just that, for example adding support for not always sending the miner tx in a broadcast and not disconnecting if the block has enough PoW but is invalid, plus some more. I would prefer we get all these changes in at once, as it's a good time to do it.
18:32:04
boog900:
Having a spec we can discuss before I just put some code in Cuprate is the better way to do this.
19:08:43
nioc:
I will comment here as I don't have a github account. The comment: "The spam attacks we have had in Monero were stopped by the short term median."
19:08:52
nioc:
1) so we can distinguish spam :)
19:09:02
nioc:
2) I thought the reason this wasn't successful is that the blocks did not grow at the expected rate; there was a bug that kept fees too low to expand the blocks
19:09:06
nioc:
I am thinking of the most recent episode, am I remembering this correctly?
19:17:01
rucknium:
nioc: Mostly the spam was using minimum fee. If the real users had auto-adjusted their fee to the next level, their txs would not have been delayed. I don't think the auto-adjust would have increased block size much because the vast majority of txs were the low-fee spam. More info: https://github.com/Rucknium/misc-research/blob/main/Monero-Black-Marble-Flood/pdf/monero-black-marble-flood.pdf
19:22:06
nioc:
yeah I thought the low-fee spam was low fee due to incorrect auto-adjust
19:22:34
nioc:
vague memories
19:31:12
321bob321:
CRS
19:38:16
plowsof:
for nioc 2) https://github.com/monero-project/monero/pull/9219
20:23:26
tigerix:matrix.org:
I believe in the good will of the people in this community with the Blocksize limit.
20:23:26
tigerix:matrix.org:
Satoshi also introduced a Blocksize limit with good will for safety reasons. This turned out to be the nail in the coffin for Bitcoin as money.
20:23:26
tigerix:matrix.org:
This shouldn't be done, because temporary things usually stay the way they are. That's just life experience![... more lines follow, see https://mrelay.p2pool.observer/e/zuK5zc8KX19ob2VN ]
20:29:17
DataHoarder:
it's already in the code and introduced. we are trying to remove it.
20:29:32
DataHoarder:
it came with cryptonote.
20:31:08
tigerix:matrix.org:
Zcash has a Blocksize limit of 2 MB and thus will never be more than private gold. Monero can be more than that!
20:31:47
redsh4de:matrix.org:
To be clear, the blocksize will still be dynamic. The limit is not arbitrary like 1MB or 2MB; it is set just under what would break the Monero network with the current code if it got there
20:33:24
redsh4de:matrix.org:
things start breaking at 32MB already
20:33:25
redsh4de:matrix.org:
the cap is 3x that
20:35:51
tigerix:matrix.org:
It sounds reasonable, but isn't this a nice problem to have?
20:35:52
tigerix:matrix.org:
I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
20:44:48
redsh4de:matrix.org:
@tigerix:matrix.org: It is not a nice problem to have if it renders the network unusable. What good is an unlimited block size if the nodes can't sync those blocks?
20:44:48
redsh4de:matrix.org:
The plan is to set a temporary cap on growth which would not be reached within 6 years. That should be enough time to resolve the underlying technical debt and fix the serialization issues with the C++ daemon that prevent us from safely scaling. After that, the cap can be forked away, because nobody wants it to be t [... too long, see https://mrelay.p2pool.observer/e/4KOIzs8KLXBfUV9k ]
20:44:48
redsh4de:matrix.org:
On Bitcoin, the 1MB block size limit was set to avoid spam. The 90MB block growth cap here is to ensure Monero doesn't literally die by bigger blocks than the reference client can chew if it gets that much activity
20:48:25
articmine:
The 90MB cap doesn't do anything that is not addressed in my proposal, unless the 100 MB bug is not fixed within six (6) years
20:49:54
redsh4de:matrix.org:
Yes, it is imperative to fix the 100MB bug asap
20:49:55
tigerix:matrix.org:
If Monero gets used more and more, we will see it way before the worst-case scenario happens.
20:49:55
tigerix:matrix.org:
To be fair, currently there is luckily no Blockstream in Monero who wants to make money by offering custodial services. But we never know which state actor is trying to steer Monero in the wrong direction.
20:49:59
articmine:
What in reality is going on here is that people are arguing for this cap in order to avoid dealing with this bug during the next 6 years
20:51:07
articmine:
@tigerix:matrix.org: There does exist a conflict of interest with strong links to US Law enforcement
20:51:23
DataHoarder:
21:36:03 <br-m> <tigerix:matrix.org> I mean, if Monero gets used that much, great! We can introduce an emergency fix for that. But does it have to be rushed beforehand?
20:51:24
DataHoarder:
that's what we are trying to do: not have to rush it. Then it gets forked away (or even removed before the next hardfork if we fix the issues)
20:51:29
articmine:
Way worse than Blockstream
20:51:32
redsh4de:matrix.org:
@articmine: Not setting the cap would be a "gun to your head" to get it fixed within 6 years, yes. Unironically can be a motivating factor
20:51:41
DataHoarder:
like. it can be exploited today even worse with existing scaling
20:52:34
DataHoarder:
the packet size cap is 50 MiB, levin deserialization 100 MiB ... and as listed last night there are other fixed caps existing already inherited from cryptonote
20:52:52
articmine:
With existing scaling one needs about 5 months
20:54:25
articmine:
In fairness to cryptonote: in 2013 they were looking at over 2000 TPS. That was less than VISA back then
20:55:03
articmine:
The TX size was like ~500 bytes
20:55:39
articmine:
It was still a bad idea back then
20:56:42
gingeropolous:
i mean call me crazy pants, but I think 6 million transactions of FCMP is better privacy than 900 gajillion transactions of ringsize 16. FCMP gets in, then it's optimize and fix all the things, like the PRs that have been sitting since 2021 that are kinda related i think
20:56:59
articmine:
Actually no
20:57:11
gingeropolous:
i really think we're missing the forest for the trees
20:57:17
articmine:
Especially with quantum computers
20:57:56
gingeropolous:
well thats a whole other kerfuffle
20:57:57
redsh4de:matrix.org:
@gingeropolous: anon / perfect-daemon had made PRs that upgrade serialization/etc, right? Maybe we'll see something of that sort from him now again after his CCS got funded
20:58:34
DataHoarder:
21:57:29 <br-m> <articmine> Especially with quantum computerss
20:58:36
DataHoarder:
^ especially. FCMP++/Carrot includes specific changes to improve PQ
20:58:44
DataHoarder:
current addressing scheme does not.
20:58:47
articmine:
@gingeropolous: It is actually a valid research topic
20:58:53
boog900:
@articmine: are you saying RingCT is better than FCMP for QCs?
20:59:50
datahoarder:
^ > <@datahoarder> The conflict of interest here is delaying FCMP++ due to scaling issues which would already cover the part of breaking surveillance for rings, so that must be prioritized. Adding sanity scaling parameters/adjustments so that can exist happily with current implementations can speed the process of deploying this in an agreeable way and stopping BS
20:59:56
articmine:
I am saying that with a QC, forward privacy can be broken by combining BS with QC
21:00:35
DataHoarder:
goal shifting now, FCMP++ deals with BS, but suddenly, that's irrelevant
21:00:43
articmine:
One needs to hide the public keys. This is in the Carrot specification
21:01:13
boog900:
yes, currently you can break it without even the public keys. It's even worse.
21:01:14
DataHoarder:
the carrot specification was changed recently :)
21:01:16
DataHoarder:
I implemented it
21:01:39
articmine:
DataHoarder: It is not irrelevant, but it is not a complete panacea
21:02:42
articmine:
DataHoarder: Do you still need to hide the public keys to have forward secrecy?
21:02:45
DataHoarder:
given current BS and you placing that much importance in it, I'd say FCMP++ completely neuters them except in specific future cases, which we built protection/fallbacks for (PQ turnstile test being one)
21:03:30
boog900:
@articmine: even if you did this is a step up.
21:04:48
articmine:
It is a yes or no question.
21:05:13
DataHoarder:
internal sends (change) are protected, even if they know all your public keys ArticMine. non-internal sends are protected, given that all public keys are not shared.
21:05:37
DataHoarder:
if they know explicitly your target address (not any) they can do quantum stuff there to learn amounts
21:05:52
DataHoarder:
learning a different subaddress is not possible
21:06:23
DataHoarder:
also - https://github.com/jeffro256/carrot/pull/6
21:06:27
articmine:
... but some public keys are available to a BS adversary
21:06:50
boog900:
why are we talking about this at all?
21:07:05
DataHoarder:
goal shifting boog900
21:07:10
boog900:
100%
21:07:11
DataHoarder:
articmine: you have 0,2, BS has 0,3, they learn nothing
21:07:22
DataHoarder:
they have 0,2, they learn amounts.
21:07:26
DataHoarder:
in quantum
21:08:29
articmine:
What does the sender have?
21:08:49
DataHoarder:
if exchange sends, they have 0,2
21:09:04
DataHoarder:
that is what BS has
21:09:11
DataHoarder:
but then you receive with 0,3. they get nothing
21:09:34
articmine:
So then BS has 0.2 for some of the public keys
21:09:44
DataHoarder:
0,2 IS the public key
21:09:50
DataHoarder:
it's not shared with 0,3 or 1,2
21:10:05
DataHoarder:
that's why they are derived with proper methods
21:10:20
DataHoarder:
(I mean 0,2 as account/index)
21:11:07
DataHoarder:
basically. exchange sends you money at address A (0,2).
21:11:42
DataHoarder:
They can break it! (but they already have the info). They can later break using quantum outputs received with specifically A (0,2)
21:12:06
DataHoarder:
you receive using B (0,3). this is not broken, this is an entirely new set of public keys
21:16:21
articmine:
Yes but all the current outputs were received with 0,2
21:16:52
DataHoarder:
what do you mean
21:17:07
DataHoarder:
so they already know the details of the outputs?
21:17:09
DataHoarder:
then why do they need to learn them
21:17:14
DataHoarder:
there isn't carrot deployed yet
21:17:23
DataHoarder:
that's why it's important to have it
21:17:38
articmine:
The existing outputs if the public keys are known
21:17:48
articmine:
The address
21:18:06
DataHoarder:
there aren't carrot existing outputs
21:19:11
articmine:
Correct, but if they are not transferred after FCMP++ they are still vulnerable
21:19:27
DataHoarder:
they are vulnerable regardless. it's not carrot outputs
21:19:41
DataHoarder:
so yes, migrating to FCMP/Carrot is important
21:19:56
DataHoarder:
without carrot you don't even need pubkeys > 22:01:24 <br-m> <boog900> yes, currently you can break it without even the public keys. Its even worse.
21:20:29
articmine:
DataHoarder: You have to
21:22:07
DataHoarder:
why are non-carrot outputs even considered. they are broken under a quantum adversary directly
21:22:24
DataHoarder:
when you bring it up. migrating would be a factor. I'm answering > 22:00:07 <br-m> <articmine> I am saying that with QC forward privacy can be broken by combining BS with QC
21:24:46
articmine:
Because BS relies on correlations between outputs. So some broken, some not broken
21:25:03
articmine:
They are not isolated from each other
21:29:06
articmine:
The worst part of all of this is it doesn't even have to work. All the government has to do is to convince a judge and not a professional mathematician that it works
21:31:30
articmine:
Then they can convict an innocent person
21:33:07
DataHoarder:
so back to the hypothetical that regardless of what we deploy, even sending everything to burn addresses, a judge can convict you
21:33:14
DataHoarder:
so it doesn't matter what we do. close the chain, right?
21:34:33
articmine:
My point is that we need multiple layers. Not just one protection
21:35:15
articmine:
... and yes sheer volume can and should be part of the equation
21:35:20
DataHoarder:
so let's deploy these layers, no? especially the ones that cover PQ and ring surveillance
21:36:47
articmine:
I am not against FCMP++. What I am against is a fanatical push to keep the existing chain as small as possible
21:37:38
DataHoarder:
I don't think it's fanatical, nor a push to keep it small, but to allow it to grow safely without exploding and causing chain splits that require emergency changes
21:40:10
articmine:
To give an example. Many of the devs are concerned about a growth rate of 2 and propose growth rates between 1.2 and 1.7. I come up with an effective growth rate of 1.085 and they ask for more and more drastic restrictions
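For a sense of scale on these growth factors, here is an illustrative compound-growth calculation, assuming a ~100 kB starting block size and sustained maximum growth every year (which real median-based scaling rules would not actually permit; this is just the envelope):

```python
import math

# Years of *sustained maximum* growth needed to go from a starting block size
# to the 90 MB cap, for the annual growth factors discussed in this thread.
def years_to_cap(start_bytes: float, cap_bytes: float, annual_factor: float) -> float:
    return math.log(cap_bytes / start_bytes) / math.log(annual_factor)

START = 100_000    # ~100 kB average block, the figure quoted earlier today
CAP = 90_000_000   # 90 MB cap
for factor in (1.085, 1.2, 1.7, 2.0):
    # e.g. 2.0 -> ~9.8 years, 1.7 -> ~12.8, 1.2 -> ~37, 1.085 -> ~83
    print(factor, round(years_to_cap(START, CAP, factor), 1))
```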
21:40:29
DataHoarder:
so bring people or gather devs to make it work now, instead of bringing people to bicker around a 90 MiB permanent size forever, which was not discussed at all. We can remove it before the next hard fork, let's do so, but otherwise a bomb is left planted (which is already there)
21:40:33
boog900:
way to misrepresent it
21:40:41
boog900:
disgusting
21:40:52
boog900:
trying to win the propaganda war again
21:41:33
boog900:
I have said again and again my position, here it is: https://github.com/seraphis-migration/monero/issues/44#issuecomment-3617687600
21:41:52
boog900:
your 1.085 increases no matter what
21:42:03
articmine:
Your position is 1.2.
21:42:10
boog900:
not exactly equivalent
21:42:15
articmine:
I am offering 1.085
21:42:21
boog900:
oh my days
21:42:49
articmine:
@boog900: Over time it is
21:43:12
boog900:
if my proposal was really more, you would like it more, the more dangerous the better right?
21:44:06
articmine:
@boog900: No, I am arguing for short to medium term flexibility
21:44:35
articmine:
Long term no more than 1.5x per year
21:45:23
articmine:
I originally had a long term sanity median of 1000000 blocks
21:45:42
articmine:
With a growth rate of 2
21:46:36
articmine:
I actually believe that Tevador's proposal is way better for a sanity cap
21:46:50
boog900:
@articmine: where? not in the proposal I am looking at
21:47:38
articmine:
I have given multiple talks with the long term sanity median of 1000000 bytes
21:47:56
boog900:
ah, so this one proposal in the past that wasn't the one you wanted for FCMP?
21:48:03
boog900:
like come on
21:48:42
articmine:
The last was at MoneroKon 2025
21:48:50
boog900:
I won't be talking about this with you anymore, we've gone round in circles enough over the past couple weeks.
21:48:57
articmine:
MoneroKon
21:50:25
articmine:
I even discussed this there with Jeffro256 who told me it was unnecessary. That is why I took it out
21:50:47
articmine:
Of course, this was for FCMP
21:52:21
articmine:
@boog900: Then don't.
21:53:25
articmine:
I know what is really going on here. It has nothing to do with scaling parameters
21:54:15
DataHoarder:
we have pointed at the specific code that would break already. bring people, or let the existing people fix it, without sending in hordes via misinterpreted social messages
21:54:55
DataHoarder:
there isn't a consensus for 90 MiB anymore, besides the 5m where there was an abstain from you. so, why all of this?
21:55:25
DataHoarder:
same limit exists on all other cryptonote derivations, too
21:56:40
DataHoarder:
in the end it doesn't matter if the technical limit is in or not. BS will exploit it and kill the network :)
21:57:39
DataHoarder:
or well, have some emergency deployment. wouldn't that be fun
21:57:43
sech1:
I think it's more of a philosophical question. Any software has its limits. Even if Monero declares "unlimited" block size, there's always a physical limit of what the network can handle. The dev team's responsibility is to ensure that this limit is always bigger than the real world usage at any time, but setting a fail-safe (hard cap for the known value of the limit) is perfectly normal, assuming that this hard cap gets increased with every node optimization (every new software release)
21:57:59
articmine:
I ABSTAINED; that does not mean I support it. When I see posts on r/BTC on this, it tells me that this 90 MB limit is very controversial outside of MRL
21:58:19
DataHoarder:
we couldn't do an emergency release for dns checkpoints either, because of existing technical debt. it's not a first
21:59:20
DataHoarder:
ofc, you abstaining can still mean you are against. usually it means that you let the rest of the consensus move forward, and don't instead try to misrepresent it elsewhere
21:59:53
articmine:
I am not misrepresenting this
22:00:55
DataHoarder:
not what I have seen on reddit comments, unless that's an impostor, if so they have done a great job.
22:01:15
articmine:
I was actually shocked by the reaction to me including Tevador's proposal into mine
22:01:16
DataHoarder:
as sech1 said "assuming that this hard cap gets increased with every node optimization (every new software release)" < I think that's the point of the technical cap.
22:03:06
sech1:
So I'm against making this a consensus rule (fixed max block size). Rather make it a constant that can be changed in a point release, and ensure that scaling rules don't let the network reach the cap quickly
22:03:28
sech1:
so the team has the time to react if network load changes
22:03:40
sech1:
quickly -> in less than 2-3 years
22:03:49
articmine:
sech1: Honestly this does not work
22:04:03
sech1:
I disagree
22:04:14
articmine:
It actually broke Bitcoin in 2013
22:04:20
sech1:
Make scaling rules work such that 90 MB can't be reached in less than 3 years
22:04:36
sech1:
If blocks start to grow, team has 3 years to react and optimize the node, and increase the hard cap
22:04:54
articmine:
sech1: My proposal means it cannot be reached for over 6 years
22:05:17
DataHoarder:
they can feed these blocks via other means, not chain growth
22:05:27
articmine:
Yet this is not enough for some people
22:05:28
DataHoarder:
it will get deserialized
22:06:18
sech1:
yes, 90 MB blocks can be crafted and sent via RPC, or even P2P as a new top chain candidate. Nodes will have to process them
22:06:34
sech1:
which is why a limit is needed, but not as part of consensus rules
22:06:35
articmine:
DataHoarder: How?
22:06:45
sech1:
it's a technical limitation, not a consensus rule
22:06:56
sech1:
you have to choose between node crashing or node just refusing such blocks
22:07:11
sech1:
either way, there is a hard cap (implicit in the first case)
22:08:12
articmine:
If this can be done outside of consensus rules then I change my vote from ABSTAIN to YES > <sech1> which is why a limit is needed, but not as part of consensus rules
22:08:32
articmine:
On the 90 MB cap
22:09:41
DataHoarder:
> <sech1> which is why a limit is needed, but not as part of consensus rules
22:09:43
DataHoarder:
^ semi-consensus: if 90 MiB blocks cannot be broadcast, then nodes can fall behind if the network is fed txs in specific ways
22:09:52
DataHoarder:
if it could be done in point releases, that'd be nice.
22:10:16
sech1:
if it gets to the point when we have 90+ MB blocks and some nodes can't sync, these nodes have to update, right?
22:10:24
sech1:
Because the fixed version will be available at this point
22:10:42
sech1:
Remember about the 3+ years lead time due to scaling rules
22:10:46
articmine:
So a node relay rule. No problem here
22:10:57
DataHoarder:
unless those are mining nodes and they make a longer chain, sech1
22:11:17
DataHoarder:
tx node relay rule gets skipped for found blocks with those txs as example
22:11:19
sech1:
Then it's just miner consensus, not a problem
22:11:30
sech1:
I think pools will self-regulate when blocks get big
22:11:49
sech1:
They won't allow their nodes to become too slow
22:11:51
articmine:
DataHoarder: A longer chain that crashes
22:11:55
DataHoarder:
maybe it can be brought to the table next MRL with more details
22:12:02
sech1:
so they'll limit blocks to a few MB or whatever value their servers can handle
22:12:10
DataHoarder:
that longer chain has smaller blocks, so no it doesn't, ArticMine
22:12:12
DataHoarder:
that's why they made it longer
22:12:45
DataHoarder:
but then - the existing limit is already there :')
22:13:11
articmine:
DataHoarder: This did not work for Bitmain in 2018
22:13:13
DataHoarder:
though a well placed limit is consistent instead of having secondary pieces throw exceptions or err
22:13:22
articmine:
That is history
22:13:31
DataHoarder:
people moved from MineXMR but they went to Qubic
22:13:32
DataHoarder:
:)
22:17:31
articmine:
Yes, but nodes refusing to relay blocks over 90 MB because they crash is very difficult to fight.
22:17:31
articmine:
Then there is my proposal, under which blocks cannot exceed 90 MB for over 6 years
22:17:38
syntheticbird:
DataHoarder: if people = cfb and its llm bots then yeah surely
22:17:55
articmine:
In consensus
22:23:34
articmine:
The way to harden a node relay rule on this is to set the node relay cap at 45 MB. Then miners will need over 51% to override the nodes.
22:24:08
articmine:
So I will support a node relay rule at 45 MB
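[Editor's note] The three tiers being discussed here, a relay-policy cap, a hard processing cap, and (contested) a consensus cap, can be sketched as follows. The function name and the behavior labels are illustrative; this is not Monero source, and only the 45/90 MB values come from the discussion:

```python
MIB = 1024 * 1024
RELAY_LIMIT = 45 * MIB   # node relay policy (changeable in a point release)
HARD_LIMIT  = 90 * MIB   # safety cap: refuse to process at all beyond this

def classify_block(size_bytes):
    """Illustrative tiered policy for an incoming block of given size."""
    if size_bytes > HARD_LIMIT:
        # Refusing outright avoids the crash-vs-process dilemma sech1 describes.
        return "reject"
    if size_bytes > RELAY_LIMIT:
        # Still accepted if it extends the best chain, but not relayed;
        # miners would need majority hashrate to push such blocks through.
        return "accept-no-relay"
    return "relay"
```

The point of the 45 MB relay tier is exactly ArticMine's argument: honest nodes stop propagating oversize blocks well before the hard cap, so miners must out-mine the rest of the network (over 51%) to impose them.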
22:27:25
diego:cypherstack.com:
Anything that delays FCMP is bad imo
22:27:50
diego:cypherstack.com:
Once FCMP is ready, we need to get it live. Monero's privacy is currently porous.
22:36:59
diego:cypherstack.com:
I have no such links, and I only care about the proposal that gets us to FCMP++ the fastest. > <@articmine> There does exist a conflict of interest with strong links to US Law enforcement
22:39:02
diego:cypherstack.com:
And I would say anyone who says anything other than FCMP++ as an absolute priority is the one with suspect intentions, given how substandard Monero's current privacy protocol is in comparison to other serious privacy tech.
22:39:34
diego:cypherstack.com:
Though that's potentially an inflammatory argument that looks too much at people, so I say it very lightly.
22:40:26
diego:cypherstack.com:
I know I am in no way the decision maker anywhere, but I want FCMP launch Q2 2026
22:40:57
diego:cypherstack.com:
And I have been burning my crypto boy candles at both ends to get it there.
22:41:13
diego:cypherstack.com:
FCMP first, scaling immediately after if need be.
22:42:14
diego:cypherstack.com:
It's not an indefinitely pushed discussion. It is just a very very VERY distant second to get FCMPs out. Once out, it can be first on the agenda.
22:46:47
diego:cypherstack.com:
One more elaboration if I may, the cryptographers working for me are also concerned about Monero scaling. We would be among the first to insist on and contribute to further scaling discussions after FCMPs goes live.
22:46:49
rucknium:
@diego:cypherstack.com: If I can prod you a bit, your position is also not an enlightened one. "Set scaling discussion aside" on its face means keep the current scaling algorithm. Many people think the current scaling algorithm allows large blocks too quickly. This is the "anchoring" problem in negotiations. It also shows how [... too long, see https://mrelay.p2pool.observer/e/w5rH0c8KTHNFdy14 ]
22:47:51
diego:cypherstack.com:
I am "fix it immediately after fcmp" not "fix it later"
22:48:32
articmine:
One can actually do FCMP++ with the current scaling untouched
22:48:40
diego:cypherstack.com:
Later is nebulous. "Immediately after FCMP" is the expectation of a hard fork within the next year, if not sooner after FCMP, to implement scaling solutions.
22:48:54
articmine:
This does actually work
22:49:02
diego:cypherstack.com:
@articmine: This was my understanding, yes.
22:51:13
diego:cypherstack.com:
I hate to break it to everyone, but we don't have a massive flood of people just waiting for FCMP before they do all of their txs which will bring us right to the brink right away.
22:51:48
diego:cypherstack.com:
We have time. Not infinite time. And not enough time to sit on our laurels, but a bit of time. A year's worth at least.
22:52:13
diego:cypherstack.com:
(yes "a year" is pulled out of my butt)
22:53:58
diego:cypherstack.com:
Since my 4 crypto boys have been picking apart Monero and FCMP non-stop for the past year, it has become very clear to me that nothing is remotely as important as getting FCMP++ live. And even though the network won't blow up in a year (it won't), I don't think we can afford delays.
22:54:56
diego:cypherstack.com:
I'm preaching to the choir here, but you all know privacy is an arms race, and RingCT might as well be 1950s tech at this point with how fast the space moves
22:58:08
articmine:
@diego:cypherstack.com: 1950s bandwidth did not support centralized ledgers such as VISA at even a fraction of what Monero currently does in transactions per second
22:58:32
diego:cypherstack.com:
@articmine: I've attended every one of your C3 talks. I know. :P
22:59:25
diego:cypherstack.com:
And it was hyperbole anyways. My point is, the arms race moves fast, and Monero hasn't taken a meaningful step forward since RingCT.
22:59:34
articmine:
... now supporting FCMP with the current scaling is a piece of cake compared to that
22:59:59
diego:cypherstack.com:
And raising the ring size barely counts
23:02:05
DataHoarder:
ring size 1024 ought to be enough
23:42:24
articmine:
This assumes that the US Government and Chainalysis can fend off the legal counter attack in the courts. If they fail we could see a sudden flood of transactions on chain > <@diego:cypherstack.com> I hate to break it to everyone, but we don't have a massive flood of people just waiting for FCMP before they do all of their txs which will bring us right to the brink right away.
23:46:14
articmine:
This is an example of why I am so opposed to a growth rate lower than 2x for the long-term median.
23:49:23
articmine:
By the way if they fail, I am seriously considering adding fuel to the fire by pursuing legal action in the EU against the delisting of Monero from centralized exchanges.
23:52:30
articmine:
By the way this is orthogonal to the proposed 90 MB cap in the consensus rules.