01:10:10
monero.arbo:matrix.org:
I think at some point, soon, we need to move on without ArticMine because these kinds of suggestions are wasting everyone's time > <@articmine> 1 GB blocks is 100 Mbps bandwidth. This is appropriate for a hard cap
01:10:27
monero.arbo:matrix.org:
it's just going in circles with them refusing to budge, when nobody else is really aligned with how they see things. it's not productive
01:36:39
ofrnxmr:
We have all the way until fcmp and carrot are upstreamed to decide on this detail
01:36:54
ofrnxmr:
So I disagree with "soon", as it's not really blocking anything
01:38:24
ofrnxmr:
It's not a simple matter of whose opinion prevails, but what facts are brought to the table to support the decisions
01:39:56
ofrnxmr:
What most people agree on is that the hardware and software need to be capable of supporting whatever we go with, and I think there is majority consensus on hard forking to increase scaling should there be breakthroughs in hard/soft limits
05:27:42
0xfffc:
https://news.ycombinator.com/item?id=46072786 time to run the fcmp paper through DeepSeek Math v2 (Math Olympiad gold level) too.
09:11:42
articmine:
If this is the attitude then leaving the scaling parameters as they are is the simplest and best solution > <@monero.arbo:matrix.org> I think at some point, soon, we need to move on without ArticMine because these kinds of suggestions are wasting everyone's time
09:17:47
articmine:
A cap at 1 GB blocks scaled at 1.5x per year is BELOW all of the suggestions I have seen regarding the long term median.
09:17:47
articmine:
This cap is in conjunction with my existing proposal, with the lower of the two block sizes controlling
09:19:09
articmine:
By the way, a soft fork with a much lower cap is also part of what I am looking at.
09:24:19
articmine:
Can someone please explain to me what is so unreasonable about a bandwidth requirement of 100 Mbps when fibre residential connections at 1Gbps are readily available?
09:24:19
articmine:
We are talking about a sanity check here, not what the market will likely demand
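The "1 GB blocks is 100 Mbps" figure being debated can be sanity-checked with quick arithmetic. A minimal sketch, assuming a 2-minute block time and ignoring relay overhead and re-broadcast to multiple peers (which the rounder 100 Mbps figure presumably leaves headroom for):

```python
# Raw bandwidth needed to move one block per 2-minute block interval.
# Relay overhead and multi-peer re-broadcast are deliberately ignored.
def required_mbps(block_bytes, block_time_s=120):
    """Megabits per second to transfer one block per block interval."""
    return block_bytes * 8 / block_time_s / 1_000_000

print(round(required_mbps(1_000_000_000), 1))  # 1 GB block -> 66.7 Mbps raw
```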
14:36:31
articmine:
First, it is not up to the miners. Did you read the latest proposal?
14:37:47
articmine:
The history of Bitcoin proves my point. There is simply no guarantee and no reason is needed
14:40:01
articmine:
I have been around cryptocurrency since 2011.
14:40:43
boog900:
In your view why hasn't bitcoin increased its block size?
14:40:54
lm:matrix.baermail.fr:
@articmine: is this the one ? https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-11-02.pdf
14:41:14
lm:matrix.baermail.fr:
@articmine: There was a reason back then, to avoid spam
14:42:27
articmine:
@lm:matrix.baermail.fr: Do you really believe that? I have a bridge to sell you. Will accept XMR
14:43:06
articmine:
The real question is: why was it not changed?
14:43:37
lm:matrix.baermail.fr:
@articmine: The issue with bitcoin is that they don't accept change, monero community do
14:44:28
articmine:
There has been a lot of change in Bitcoin
14:45:17
boog900:
If the community doesn't like the idea of big blocks we will just have a HF to remove them? I don't think pre-emptively making stupidly big blocks possible is going to preserve dynamic blocks if there is consensus against it.
14:46:52
boog900:
actually it would just be a soft fork to remove it you wouldn't even need full consensus
14:47:10
articmine:
@boog900: Big blocks have been possible in Monero since its launch
14:47:44
boog900:
completely avoided my comment again
14:48:01
boog900:
its so hard to talk to you
14:48:33
ofrnxmr:xmr.mx:
@boog900: you don't even need a soft fork. Someone able to produce 51/100 blocks can reduce it by producing small block templates
14:49:36
boog900:
@ofrnxmr:xmr.mx: soft fork requires 50% of hash power, but yeah you are completely correct it just needs 50% of blocks, which is 33% with selfish mining IIRC
14:50:16
articmine:
@ofrnxmr:xmr.mx: One can do this by miner voting, but that is not enough.
14:50:46
ofrnxmr:xmr.mx:
no, I mean, you can disrupt the medians by producing artificially small blocks
14:51:28
ofrnxmr:xmr.mx:
Like Qubic, who didn't include normal txs. Each time they produced 51/100 blocks, they effectively reset the short term scaling
14:52:57
articmine:
@ofrnxmr:xmr.mx: Yes if over 50% mines below a certain threshold. The medians will not move beyond that threshold so one has a cap
14:53:08
articmine:
That is my point
14:54:23
articmine:
@ofrnxmr:xmr.mx: They could not reset a median that had not moved
14:55:13
ofrnxmr:xmr.mx:
Let's say blocks had grown to 2 MB. Qubic mining 51/100 blocks would have reset it back to 300 kB
14:55:45
articmine:
That is correct
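The median-pinning effect agreed on above can be sketched in a simplified model. This is not Monero's actual consensus code (the real rules use block weights and both short- and long-term medians); it assumes a 100-block short-term window and a 300 kB penalty-free zone for illustration:

```python
# Simplified model: the effective limit tracks the median of the last
# 100 block sizes, floored at the penalty-free zone. A miner producing
# 51 of every 100 blocks at the minimum therefore pins the median.
from statistics import median

WINDOW = 100
PENALTY_FREE_ZONE = 300_000  # 300 kB floor (simplified)

def effective_median(recent_sizes):
    """Median of the last WINDOW block sizes, floored at the penalty-free zone."""
    return max(int(median(recent_sizes[-WINDOW:])), PENALTY_FREE_ZONE)

# Organic growth: the chain has reached 2 MB blocks.
assert effective_median([2_000_000] * WINDOW) == 2_000_000

# 51 of the last 100 blocks mined at the minimum size: the median
# collapses back to the floor, resetting short-term scaling.
attacked = [2_000_000] * 49 + [PENALTY_FREE_ZONE] * 51
assert effective_median(attacked) == PENALTY_FREE_ZONE
```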
14:56:10
boog900:
So a community that is against scaling will be able to prevent scaling anyway
14:56:15
articmine:
... and they can keep it there
14:56:34
articmine:
@boog900: Correct no HF needed
14:56:34
boog900:
^ > <@boog900> If the community don't like the idea of big blocks we will just have a HF to remove them? I don't think primitively making stupidly big blocks possible is going to preserve dynamic blocks if there is consensus against it.
14:57:20
articmine:
No need for a HF. Just don't mine them
14:57:50
articmine:
over 51%
14:58:23
articmine:
Furthermore it is completely reversible
14:58:40
boog900:
So the argument that we might not be able to remove the slow growth if it is added is not a great argument IMO.
14:59:37
articmine:
Do you trust the community?
15:00:27
datahoarder:
@articmine: I trust the community but not miners that hop around not understanding that monero doesn't have ASICs just for $$$. If someone gives 2x $$$ they will do anything
15:00:31
boog900:
I trust that whatever happens in 2 years the community can change what they like anyway.
15:01:05
articmine:
Who is the community?
15:02:29
lm:matrix.baermail.fr:
@articmine: The good question is does the mining hash power represent well the community.
15:02:49
lm:matrix.baermail.fr:
In the end miners are deciding, devs are only proposing.
15:03:32
boog900:
@articmine: The combined decision processes of devs, miners, node operators.
15:03:35
boog900:
Etc
15:03:58
ofrnxmr:xmr.mx:
i think the biggest issue with large blocks is medium-to-long-term storage (N GiB / day), IBD (unable to sync blocks over ~30 MB, and definitely over 100 MB), and wallet sync. The latter is probably the biggest issue for p2p cash. High tx throughput will, on its own, be bottlenecked by bandwidth, verification, txpool limits, and again... wallet sync, as wallets also need to parse the txpool
15:05:42
datahoarder:
Instead of hashpower deciding it should be the economic majority deciding then. Time to open PoS discussions again so PoS can decide blocks. Then someone can just pay $$$ to get their option chosen instead of going via the Qubic $$$ method
15:06:16
boog900:
@lm:matrix.baermail.fr: Miners can only decide on so much without majority hash rate, we can change the algorithm so it doesn't adjust so quickly for example
15:06:38
ofrnxmr:xmr.mx:
@boog900: I wanted stm to be like..720 blocks
15:06:58
datahoarder:
@ofrnxmr:xmr.mx: Passing block headers around via Tor is already slow, and that just includes txids, now imagine passing 1GB blocks per tor node multiple ways. At that point we better fund Tor itself as it'd be most of the traffic
15:07:27
articmine:
@boog900: How does this help with a failure point at 100 MB?
15:08:00
sgp_:
Artic, read the room. No one wants massive blocks in a year. Your proposal doesn't make sense. Even kayaba's hardcoded value (which can be changed, I don't buy that just this one change will be permanent) is way easier to support.
15:08:05
datahoarder:
You might have 1Gbit connection but then use Tor. Is Tor something that is to be supported by Monero in the future? Then a cap does make sense
15:08:49
datahoarder:
You will then have at most 2-10Mbit/s when communicating across Tor peers
15:08:57
boog900:
@articmine: I wasn't talking about that, that was a direct reply to their point
15:09:12
datahoarder:
That's 14 minutes to sync 1 GiB
15:09:29
datahoarder:
~80 seconds for 100 MiB blocks
15:09:52
datahoarder:
25s for 32 MiB blocks
15:10:09
datahoarder:
Usually depending on guard nodes and other conditions this will be way slower
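The sync times quoted above follow from simple arithmetic; a sketch assuming a ~10 Mbit/s Tor circuit (the optimistic end of the 2-10 Mbit/s range mentioned):

```python
# Time to transfer a block of a given size over a bandwidth-limited link.
def sync_seconds(size_mib, mbit_per_s=10):
    """Seconds to transfer size_mib MiB at mbit_per_s megabits per second."""
    bits = size_mib * 1024 * 1024 * 8
    return bits / (mbit_per_s * 1_000_000)

print(round(sync_seconds(1024) / 60, 1))  # 1 GiB   -> ~14.3 minutes
print(round(sync_seconds(100)))           # 100 MiB -> ~84 seconds
print(round(sync_seconds(32)))            # 32 MiB  -> ~27 seconds
```

At the 2 Mbit/s low end the same 32 MiB block takes about five times longer, ~2 minutes, matching the figure given further down.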
15:11:01
datahoarder:
ofc, this gets spread over time as it's transaction data, but they would still start falling behind
15:11:07
ofrnxmr:xmr.mx:
For IBD, 25s for 32 MB blocks isn't disastrous, and while fully synced, it should still be much faster due to fluffy blocks (you already have the txs, in most cases)
15:11:39
datahoarder:
@ofrnxmr:xmr.mx: and that's spread over time (tx pool)
15:11:57
datahoarder:
not instant when block is received, but should account for that bad case of all txs being in block only
15:12:13
ofrnxmr:xmr.mx:
Like when Marathon or Qubic doesn't share their txs
15:12:27
datahoarder:
at 2 Mbit/s it's ~2m or so
15:13:10
articmine:
@datahoarder: This is not bandwidth
15:13:19
ofrnxmr:xmr.mx:
@ofrnxmr:xmr.mx: This is obv an attack vector, large blocks w/ an entity that broadcasts a large block that had private txs
15:13:30
datahoarder:
@articmine: This is tor. see above messages.
15:13:45
datahoarder:
On tor that bandwidth ends up shared across a few guard connections
15:14:05
datahoarder:
if not one only
15:15:17
articmine:
Do we have to sync over Tor?
15:15:56
datahoarder:
Do we want to support Tor for end users or operators? That might be in areas where they have to use it? Will the community accept removing Tor as an option?
15:16:15
articmine:
Yes but how
15:17:42
articmine:
I have seen discussion over the years where both Tor and clear net are used
15:18:12
datahoarder:
That's relaying txs only. But more important is the ability of a user to hide their Monero usage
15:18:31
datahoarder:
which if you end up using clearnet, you can't, even if using centralized VPNs
15:18:54
articmine:
... and run a full node
15:20:52
datahoarder:
There was discussion on last MRL of research of methods to aggregate individual proofs, then place that in the block, and be able to throw away individual tx proofs. That'd allow semi-pruned block and tx sync (where you only sync pre-aggregated blocks) for way lower bandwidth size
15:21:19
datahoarder:
That'd support bigger blocks (weighted on initial tx size) but easier to sync for limited nodes
15:21:52
datahoarder:
that's in research, and would require a hardfork if it is indeed possible. So that's when raising this limit higher could be considered
15:23:23
articmine:
Yes but all of this requires a fixed limit?
15:24:32
ofrnxmr:xmr.mx:
1. penalty free zone = KB = 300, 500, 750, 1000
15:24:32
ofrnxmr:xmr.mx:
2. STM = blocks = 100, 720
15:24:32
ofrnxmr:xmr.mx:
3. max ST block size = MB = 16, 32
15:24:32
ofrnxmr:xmr.mx:
4. max LTM block growth = multiplier = 1.2, 1.7, 2, neilsons law, moores law
15:24:32
ofrnxmr:xmr.mx:
5. 300 or 500[... more lines follow, see https://mrelay.p2pool.observer/e/0f_z9s0KanFROFdx ]
15:24:44
datahoarder:
We also have a fixed block time, which this takes into account. So indeed it has to be bounded at the moment, I don't know if 32 MiB is the limit to pick here. That's already 24 GiB per day
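The daily-growth figure can be checked directly against the fixed 2-minute block time (720 blocks per day); exact arithmetic gives ~22.5 GiB/day for 32 MiB blocks, in the same ballpark as the ~24 GiB quoted above:

```python
# Worst-case daily chain growth if every block hit the maximum size,
# at Monero's fixed 2-minute block time.
BLOCK_TIME_S = 120
BLOCKS_PER_DAY = 24 * 60 * 60 // BLOCK_TIME_S  # 720

def daily_growth_gib(max_block_mib):
    """GiB of chain growth per day at a given maximum block size in MiB."""
    return BLOCKS_PER_DAY * max_block_mib / 1024

print(daily_growth_gib(32))   # -> 22.5 GiB/day at 32 MiB blocks
print(daily_growth_gib(100))  # -> ~70.3 GiB/day at 100 MiB blocks
```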
15:25:04
ofrnxmr:xmr.mx:
@ofrnxmr:xmr.mx: 5 6 7 8 are supposed to be my choices for 1 2 3 4. Matrix changed the numbers on me
15:25:29
datahoarder:
Let's give the assumption that while Tor bandwidth is limited, AI has not consumed all chip production so you can still grab SSDs and HDDs. FCMP++ afaik should make HDD syncing better right?
15:26:07
ofrnxmr:xmr.mx:
I think ginger synced on an HDD
15:26:19
ofrnxmr:xmr.mx:
If so, id say yes
15:26:55
datahoarder:
So high weight spread over long time is less of a problem than the instant peak/max weight for the block that needs to sync right there and then, or you fall behind. And if most of the network is falling behind, you start opening issues like alt blocks being more common. Which causes more sync times as well :)
15:27:41
datahoarder:
@ofrnxmr:xmr.mx: At worst a mixed SSD cache + HDD archive can be used, I guess, if SSDs become limited due to AI bullshit.
15:28:16
boog900:
HDD syncing is weird, even fast syncing is slow when it is not doing any ring lookups
15:28:41
ofrnxmr:xmr.mx:
@boog900: For fcmp as well?
15:28:56
ofrnxmr:xmr.mx:
@gingeropolous:monero.social did you sync fcmp on hdd?
15:29:14
boog900:
Well FCMP wouldn't change anything that would change that
15:29:35
datahoarder:
^ no ring lookups, just key image lookups + layers right?
15:30:02
boog900:
Cuprate is the same FWIW
15:30:34
articmine:
First, HDD synchronization is just inflicting pain
15:30:40
boog900:
If I had to guess I would guess LMDB is the bottleneck on a HDD
15:30:52
jeffro256:
Also TXID lookups IIRC. The daemon explicitly checks if a transaction ID is already in the chain before inserting
15:31:14
datahoarder:
What I mean overall, the end critical points for what is an acceptable size is first, what can the software actually manage (stressnet shown some issues there, plus existing limits); second what is a reasonable time to sync the txs making the blocks up over a supported limited connection. If Tor is a supported connection, then [... too long, see https://mrelay.p2pool.observer/e/_MaM980KRlp0MnJ1 ]
15:31:18
articmine:
@boog900: It is. But why HDD?
15:31:29
jeffro256:
Which shouldn't be needed with key image lookups AFAIK but it does it anyways
15:32:15
boog900:
@articmine: Because LMDB uses copy-on-write B-trees and reuses previous pages, which leads to pages being spread out across the database. Again, just a guess
15:32:16
datahoarder:
@jeffro256: That can be implemented as a mixed mode, I guess, caches in SSD + HDD for archival bulk data
15:32:39
boog900:
@boog900: On an ssd the read performance makes this fine
15:33:02
articmine:
I mean why use an HDD over SSD?
15:33:13
boog900:
Oh. Cost?
15:33:38
datahoarder:
SSDs are increasing in cost, higher capacity is becoming harder due to AI sucking in all chip production.
15:33:47
datahoarder:
It might be temporary, it might last 4 years.
15:34:05
datahoarder:
So if HDD syncing (for bulk archival) is fine, then that's not a worry
15:34:16
datahoarder:
People can have small SSD + big HDD for full nodes.
15:35:22
articmine:
No wonder it is taking forever to sync
15:36:49
boog900:
Cuprate has a split database so it would be interesting to test with the tapes on a HDD and have LMDB on an SSD
15:36:55
boog900:
I'll try it later
15:36:58
articmine:
I have to disclose my conflict of interest
15:36:58
articmine:
I own a part of a company that sells devices to run Monero nodes on SSD
15:37:42
datahoarder:
@boog900: What about having the tapes on tapes? I have some LTO-8 here locally! Just slightly bad seek times...
15:38:36
articmine:
@datahoarder: Go for broke with punch cards
15:39:59
boog900:
@datahoarder: The interface is generic so I mean if you can fit it to the abstraction you can give it a go lol
15:40:18
articmine:
I still remember the 2MB limit on the University mainframe in 1979
15:41:43
boog900:
@boog900: https://github.com/Cuprate/Tapes/blob/638a528635524fc9eb6e945b8def399c660d856f/src/memory.rs#L22
15:42:23
ofrnxmr:
@articmine: I'd hardly argue that Nodo is relevant here. They aren't particularly fast. The SSDs are fast, but the processors aren't. Migrating the mainnet db to fcmp takes 26hrs
15:43:02
datahoarder:
@boog900: looks like an mmap? so it can write anywhere or just append to it (resize + write to the end)
15:49:55
boog900:
@datahoarder: Yeah the 2 currently supported backends are a memory mapped file or just in memory bytes
15:51:12
articmine:
Anyway running Monero on HDDs is not an argument for a hard cap hard fork. We need more than that.
15:51:24
datahoarder:
@articmine: I was saying the opposite...
15:51:59
articmine:
SSD
15:52:10
datahoarder:
That storage shouldn't be the cap, we seem to be fine. The end points are in the previous message
15:52:17
datahoarder:
> <@datahoarder> What I mean overall, the end critical points for what is an acceptable size is first, what can the software actually manage (stressnet shown some issues there, plus existing limits); second what is a reasonable time to sync the txs making the blocks up over a supported limited connection. If Tor is a suppo [... too long, see https://mrelay.p2pool.observer/e/3tHZ980KZngyM19k ]
15:52:17
datahoarder:
^ this one
15:52:59
datahoarder:
It's about what the software can actually handle, and then sync speed/time (ignoring storage backend, just P2P stuff like Tor)
15:56:38
articmine:
Has anyone tried up-to-date hardware?
15:58:07
datahoarder:
Yeah. I have been syncing and working on stressnet with an AMD 9900X3D + various backing NVMe + 128GB ram
15:58:52
articmine:
What kind of issues
15:59:03
datahoarder:
and it was suffering there with the big blocks. I have to assume not everyone has my specs, but if sync-only it should be fine on HDD and lesser CPU/RAM, for at least what I handled
15:59:39
articmine:
How was it suffering?
15:59:48
datahoarder:
The people who develop the code have raised these issues here, they aren't just FCMP++ specific. So I won't repeat again what they have said here a couple of times. It feels we are going in circles.
16:00:23
articmine:
There are serious software issues I know.
16:00:42
articmine:
I am talking about the hardware
16:00:44
datahoarder:
If the people actually developing the code aren't listened to, who has the authority to say that the piece of software is ready for 32, 64, 100 MB or 1 GiB blocks?
16:01:03
datahoarder:
I had issues with the software. Not hardware.
16:01:38
articmine:
@datahoarder: Thank you
16:02:02
datahoarder:
This is what all the talk about "software isn't ready" is about :)
16:02:21
datahoarder:
Besides the verification time of blocks being quite bad for mining (20+ seconds at times before Monero moves to tip, and as such miners switch to new template)
16:02:42
datahoarder:
But that is AFAIK workable in different ways
16:02:53
datahoarder:
It's my conflict of interest, P2Pool
16:02:55
articmine:
So the primary issue is software
16:03:36
datahoarder:
I was running clearnet. I should test tor for funsies there. I run some nodes on mainnet on Tor + P2Pool seed nodes in Tor, that's why I brought that issue as well
16:03:55
datahoarder:
Knowing Tor limitations that aren't really going away much, though Tor capacity increased
16:09:19
datahoarder:
More things that start breaking as block sizes go up currently
16:10:11
datahoarder:
As also said, there's research that could allow aggregate proofs for blocks (so txs would sync pruned yet still fully verifiable as a set) that can make the limits for constrained P2P be lesser.
16:11:51
datahoarder:
Ofc, that'd require a hardfork, which then could have any limits on block size (which regardless of number, are needed for current technical reasons, as unless a hardfork is had people can perfectly use older Monero releases)
16:12:10
articmine:
Let us consider the 100 MB issue. I see two options
16:12:10
articmine:
1) Fix the software problem
16:12:10
articmine:
2) Place a HF hard cap below 100 MB[... more lines follow, see https://mrelay.p2pool.observer/e/5b2i-M0KWnE0RXRR ]
16:13:59
datahoarder:
It's not even 100 MB, the issues start becoming quite bad well below it.
16:14:02
articmine:
Arguing over the rates of growth of scaling parameters will get us nowhere
16:14:30
articmine:
@datahoarder: I know that is an example
16:14:35
datahoarder:
1. takes time and as said, "as unless a hardfork is had people can perfectly use older Monero releases"
16:14:53
datahoarder:
it should be worked on. when the implementations are ready, bump number up.
16:20:31
articmine:
What I really like about this proposal is that it forces the issue https://github.com/monero-project/research-lab/issues/154
16:21:42
articmine:
It actually may be necessary, that is the sad part.
16:22:26
articmine:
... but it will be controversial.
16:22:58
articmine:
...and not just here in this room
16:23:50
articmine:
In any case I just don't have the time for this.
19:32:33
gingeropolous:
@ofrnxmr:xmr.mx: yeah the dell580s is HDD
23:31:33
rucknium:
@monero.arbo:matrix.org: janowitz says https://github.com/monero-project/meta/issues/1303#issuecomment-3592432820 > <@monero.arbo:matrix.org> it's just going in circles with them refusing to budge, when nobody else is really aligned with how they see things. it's not productive
23:31:33
rucknium:
> I am one of the few being fully with ArticMine.
23:38:53
datahoarder:
@rucknium: Yeah, those numbers listed work well for clearnet. For usage in more restricted applications it'd end up behind Tor, with way more limited bandwidth. Storage part is still true, besides the current 2x-4x increase (quoted data is 2020-2023) https://www.pcgamer.com/hardware/memory/keep-up-to-date-with-the-pc-memory-and-ssd-supply-crisis-as-we-track-prices-and-the-latest-news/
23:39:44
datahoarder:
Note that's for chips, not end products, which will end up slowly rising over years (or maybe not dropping as much). It's still reasonable, even if considering HDDs