On Fri, Mar 15, 2024 at 04:19:21PM +0200, MSavoritias wrote:
> By servers I mean a separate machine that is used to run services non
> graphically that usually needs to be always online.
No, there are machines that *need* to be online, but there are always
human beings who *like* to have machines always online even if they
don't need it for GNUnet.

> > That social network may involve "server" nodes, that is, nodes that
> > run on behalf of their owner on a permanent node. They will end up
> > doing server-like jobs without requiring a different protocol or
> > implementation than any other participant, and several of them
> > provide redundancy and scalability in the case of one-to-many
> > channels.
>
> And that's where we disagree yeah.
>
> Requiring separate machines to be bought and maintained, that will
> always deliver stuff is a non starter. for accessibility, for
> equality, for autonomy and the environment.

Did I say "require"? Some people just happen to do that, and the effect
is that their social network gets better service, but the system also
works without them.

> I see how it may be beneficial for some abstract "efficiency" scale i
> just don't think the tradeoffs are worth it.

Point is, the trade-offs you are probably thinking of aren't there.
Federation doesn't work because people are supposed to TRUST some
server, be it from their school, their company or their big brother. In
our architecture, a "super node" has zero advantages over other nodes -
no access to any data - it merely makes things smoother for the people
who are dear to you. So where are the trade-offs?

> Actually I was thinking not only that. By splitting it you also don't
> need to get everything from one person. same with BitTorrent. so
> bandwidth is distributed.

Oh well, we already have an implementation of that, called gnunet-fs. I
wasn't thinking of asynchronous file sharing.

> > So you still have plenty of work to do. Why use an old, inefficient
> > and innovation-blocking so-called "standard" when it was a wrong
> > choice even back in 1999 when it got declared a standard?
>
> I feel like your subjective feelings get in the way here.
I think I provided extensive evidence why the XML syntax is among the
worst choices to make, and all the argumentation I see against that is
subjective feelings by people who invested time in XMPP.

> Personally its simple I don't want to reinvent everything. Also our
> goals are radically different as I have mentioned.

I think bikes with square wheels are fine, I don't want to invent a
round wheel!

> I don't want scale or servers/nodes.

Then you don't want adoption by the human race, just by small groups?

> > Federation has failed us big time and it is all the reason why
> > GNUnet exists.
>
> By federation i mean that the room is hosted by all participants. We
> can call it distributed too.

Please use scientific vocabulary. Federation is NOT distribution.
Federation is when servers talk to each other and you have to entrust a
server to participate. It is the concept GNUnet rejected from the start.

> > Well, that's what the social graph is good for. Secushare would like
> > to have a distributed social graph, not completely transparent, but
> > such that you can tell if a communication going through your node is
> > coming from a trustworthy source, even if you don't know exactly who
> > it is.
> >
> > Spammers then don't have much of a chance because they all come in
> > from one single person in the social network, who can be pointed out
> > as the spam origin and eliminated.
> >
> > I don't see a need for digging into detailed caps for this if the
> > general principle works, but I may be misjudging this.
>
> In your example the spammer can contact other people right? That's a
> fundamental failure of the current internet architecture.

The only way for spammers to exist is to leverage their own social
network of friends. They can do that only once, then they are isolated.
By logical consequence they will not even attempt to do so, because it
sucks to be blacklisted by all the people you know.
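The containment argument can be sketched as a toy model. To be clear:
the class, names, and blacklisting logic below are my own illustration
of the principle, not secushare's actual design.

```python
# Toy sketch of spam containment in a social graph: a message is only
# accepted if it arrives from a direct, trusted friend, and a node
# caught spamming is blacklisted by its contacts, severing its only
# entry point into the graph.

class Node:
    def __init__(self, name):
        self.name = name
        self.friends = set()      # direct trust edges
        self.blacklist = set()    # former friends caught spamming

    def befriend(self, other):
        self.friends.add(other)
        other.friends.add(self)

    def accepts_from(self, sender):
        # Only direct, non-blacklisted friends can deliver messages.
        return sender in self.friends and sender not in self.blacklist

    def report_spam(self, sender):
        # Blacklisting cuts the spammer off from everyone who knew them.
        self.blacklist.add(sender)


alice, bob, spammer = Node("alice"), Node("bob"), Node("spammer")
alice.befriend(bob)
bob.befriend(spammer)          # the spammer's only social capital

assert bob.accepts_from(spammer)      # works exactly once...
bob.report_spam(spammer)
assert not bob.accepts_from(spammer)  # ...then the spammer is isolated
```

The point the sketch makes: the spammer's reach is bounded by their own
friendships, and burning those friendships once removes them for good.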
And the spammers that you are talking about do not even have the
addresses necessary to reach a recipient, because those addresses can
neither be guessed nor enumerated.

> Any future architectures should make sure that the spam message
> doesn't even reach the recipient to begin with.

That's what we've been preaching to the advocates of the broken
Internet for years. Glad you arrived at the same conclusions as we did.
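As an aside, the unguessability point can be illustrated with a
hypothetical hash-derived mailbox scheme. This is purely illustrative
and is not GNUnet's or secushare's actual addressing; the function and
parameter names are invented for the example.

```python
# Hypothetical sketch: each recipient derives a distinct mailbox
# address from a secret shared out of band with each authorized sender,
# so outsiders can neither guess a valid address nor enumerate them.
import hashlib
import secrets

def mailbox_address(shared_secret: bytes, recipient_id: bytes) -> str:
    # Without the shared secret, finding a valid address means
    # searching a 256-bit space.
    return hashlib.sha256(shared_secret + recipient_id).hexdigest()

secret_with_bob = secrets.token_bytes(32)   # exchanged out of band
addr_for_bob = mailbox_address(secret_with_bob, b"alice")

# A spammer guessing with a random secret virtually never hits a valid
# address:
guess = mailbox_address(secrets.token_bytes(32), b"alice")
assert guess != addr_for_bob
```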