dotfiles/weechat/logs/python.matrix.wyattjmiller.!lsdcnqmhrvaovzivcy:radicle.community.weechatlog
2021-07-07 22:59:54 -04:00



2021-06-01 22:30:43 --> wymiller (@wymiller:matrix.wyattjmiller.com) has joined #radicle:general
2021-06-02 02:56:53 &fintohaps My thinking is that all these questions have the same answer: it's all in the size of the network. The more your project is replicated, the higher the probability that a consistent version of the project lives out there. In the worst case scenario your data is always fucked, because it's the worst case. If I only kept my project on my laptop and then my laptop's disk got corrupted then I'd lose my project. But if I join multiple seeds and they replicate my project then I have a higher chance of recovering a version of the project.
2021-06-02 03:02:47 ubi#3948 Right
2021-06-02 03:03:28 ubi#3948 Which is why we probably push to some remote server, besides the collaborative part of it
2021-06-02 03:06:52 ubi#3948 So, having some details about an SLA (just using the term to refer to the topic) would be important. As of today, GitLab seems to have 1 master/2 replicas (so 3 seeds); otherwise, it is all good and dandy trusting the seeds until it isn't, and you realize you have lost your data.
2021-06-02 03:08:23 ubi#3948 Personally I ask because I have projects sitting in GitHub and GitLab that I don't have on my local computer, and I probably don't care to have them for the most part, but I don't want to lose them either. So if I moved them to a seed, then I am at the mercy of it
2021-06-02 03:11:42 &fintohaps You're also at the mercy of GitHub and GitLab 🤷‍♂️
2021-06-02 03:13:38 ubi#3948 Sure, but that is why some people pay for signing a legal agreement 😉
2021-06-02 03:14:30 xla Have you read the EULA and TOS of these services?
2021-06-02 03:14:32 ubi#3948 I am with you 100%, but that is not the point, and besides Radicle being decentralized or whatever ... I don't think that avoiding the topic in some way is a good idea either
2021-06-02 03:14:42 ubi#3948 xla read what I said, I was typing
2021-06-02 03:14:53 xla To clarify looking at the seed as stateless frontends to some HA store is not the right angle.
2021-06-02 03:15:02 ubi#3948 Yes I have read the ToS for our enterprise version of Gitlab
2021-06-02 03:15:13 xla They operate with distinct device keys which is important to maintain the local monorepo.
2021-06-02 03:15:15 ubi#3948 But beside that, that is not the point
2021-06-02 03:16:40 ubi#3948 Which I kind of understand, that Radicle doesn't do much related to durability, whatever your disk configuration is; what I am suggesting is to take ownership of the topic and answer the concerns people have
2021-06-02 03:17:14 ubi#3948 Especially from the seed mode perspective, since data corruption in replication could be the problem ... I am not sure
2021-06-02 03:17:21 ubi#3948 asking for direction, and answers
2021-06-02 03:18:32 xla There have been multiple times now where others and I gave answers to your questions. If you have concerns that should be discussed with a larger audience, feel free to start a topic on radicle.community around this.
2021-06-02 03:18:49 ubi#3948 What is the answer?
2021-06-02 03:19:01 ubi#3948 Also, are you getting frustrated by any chance?
2021-06-02 03:20:10 xla Somewhat, because this is going in circles. The answer is that the seed is as durable as the storage that is used; if you find/choose some block storage which has higher guarantees than a single disk, that is your durability improvement.
2021-06-02 03:20:13 ubi#3948 Just want to make sure I am not burning you out, because from my perspective I feel I haven't gotten a solid answer (probably I missed it), but I'd rather stop here and come back to this later
2021-06-02 03:21:05 ubi#3948 That is one part; as I said before, I get it, and I understood already. We are going in a loop because that is only one part of the equation
2021-06-02 03:21:35 ubi#3948 Replication is the second part, and making sure that the replication is not corrupt due to some outage and whatnot
2021-06-02 03:21:52 ubi#3948 And then, clustering of seeds so you can achieve HA as much as you can
2021-06-02 03:22:32 ubi#3948 So I understand, and I totally get what you mean about the durability; as I said before, I got it, but I am trying to understand the second part of the story, which is where I wish I didn't get pulled back to the durability topic
2021-06-02 03:22:33 &fintohaps The protocol team's current focus is on getting a solid foundation around the protocol itself. The thought of durability isn't currently a concern as we'll currently rely on the network community for such things.
2021-06-02 03:23:21 &fintohaps > In reply to @_discord_105414642838298624:t2bot.io
2021-06-02 03:23:21 > Replication is the second part, and making sure that the
2021-06-02 03:23:21 > replication is not corrupt due to some outage and what
2021-06-02 03:23:21 > not
2021-06-02 03:23:21 If you can point to how replication may get corrupted (via git) then maybe the conversation could be more fruitful :)
2021-06-02 03:23:56 ubi#3948 That works for me, as long as you keep it in mind on the roadmap. As I said before as well, I am not looking for things to be done, but at least to understand where people are with the topic
2021-06-02 03:24:18 &fintohaps Understandable :)
2021-06-02 03:24:38 &fintohaps We encourage you to think about it too, and as xla pointed out, start a topic in radicle.community
2021-06-02 03:25:29 ubi#3948 I posted a link related to GitLab at scale, and they did have issues with replication (and they do use Git) 😉 so I don't know
2021-06-02 03:26:01 ubi#3948 this is where projects trying to be decentralized with HA end up doing repair phases to figure out such problematic topics
2021-06-02 03:26:03 ubi#3948 Which
2021-06-02 03:26:04 ubi#3948 V2
2021-06-02 03:27:00 xla Can you give an example of a decentralised system which does repair phases?
2021-06-02 03:27:27 &fintohaps > In reply to @_discord_105414642838298624:t2bot.io
2021-06-02 03:27:27 > I posted some link related to GitLab at scale, and they
2021-06-02 03:27:27 > did have issues with replication (and they do use Git) 😉
2021-06-02 03:27:27 > so I dont know
2021-06-02 03:27:27 Can you point out the paragraph that refers to it?
2021-06-02 03:29:10 ubi#3948 ^
2021-06-02 03:29:49 ubi#3948 Storj, off the top of my head ... or any other storage mechanism that must figure out how to replicate the data with strong guarantees of durability of the files and whatnot
2021-06-02 03:30:22 ubi#3948 Going back to what I said multiple times: again, I get that it could be part of the storage layer rather than this ...
2021-06-02 03:31:04 xla I pointed that out before: distributed storage systems are not a good mental model to apply here. And a comparative analysis of their properties is moot. Radicle does not implement or rely on a distributed datastore.
2021-06-02 03:31:15 xla As there is no globally coherent dataset.
2021-06-02 03:31:28 ubi#3948 As I said before, please don't pull me into the conversation about it if you feel burned out
2021-06-02 03:31:34 ubi#3948 I got it, and if that is the answer, so be it
2021-06-02 03:39:18 ubi#3948 Where their stuff makes no sense is that, since it is p2p, there is no single point of failure, so the concept of master/replica is non-existent
2021-06-02 03:40:10 ubi#3948 * Something like this, https://docs.gitlab.com/ee/administration/gitaly/praefect.html#replication-factor
2021-06-02 03:40:10 Does Radicle have some threshold/replication factor or whatever that I could, for example, trust when pushing to the seeds or whatever?
2021-06-02 03:43:09 ubi#3948 * The whole page is an interesting read btw
2021-06-02 03:43:44 &fintohaps > Replication factor is the number of copies Praefect
2021-06-02 03:43:44 > maintains of a given repository. A higher replication
2021-06-02 03:43:44 > factor offers better redundancy and distribution of read
2021-06-02 03:43:44 > workload, but also results in a higher storage cost. By
2021-06-02 03:43:44 > default, Praefect replicates repositories to every storage
2021-06-02 03:43:44 > in a virtual storage.
2021-06-02 03:44:25 &fintohaps This sounds like what I said earlier. The replication factor in Radicle is how many peers you're connected to and how many of those are interested in your project
2021-06-02 03:45:00 &fintohaps I don't see anything about replication corruption there. But I'm only glancing through
2021-06-02 03:49:42 ubi#3948 Right right, I understood that part, and that is where I am focusing on the seed particularly, because it must be interested in everything since most likely many peers will be connected to it.
2021-06-02 03:49:42 But what is the guarantee that it did happen? If you read the full article, they still have to check that it did happen and that the "replicas" are up-to-date and whatnot ... which I am not sure what that means in the context of Radicle seeds
2021-06-02 03:50:49 ubi#3948 Which is the part I am trying to figure out at the moment (It seems that GitLab does rely on the storage layer for the guarantee, as Radicle decided to btw )
2021-06-02 03:52:11 &fintohaps Right, at the moment there are no "guarantee" mechanisms
2021-06-02 03:52:20 ubi#3948 So imagine a cluster of 3 seeds and me, and those are all the peers; how will Radicle guarantee that everything among us is in a healthy state?
2021-06-02 03:52:24 ubi#3948 Which v2
2021-06-02 03:52:50 &fintohaps Gossip is sent and any node which cares about that gossip is expected to replicate it if it's interesting
2021-06-02 03:53:19 &fintohaps "Interesting" meaning that the node tracks that Urn and PeerId
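A minimal sketch of the tracking rule fintohaps describes above: a node treats gossip as "interesting", and therefore replicates it, only if it tracks that (Urn, PeerId) pair. The class and method names are purely illustrative, not the real radicle-link API.

```python
# Hypothetical sketch of "interesting" gossip: replicate only what you track.
# Names are illustrative; this is not the radicle-link implementation.

class Node:
    def __init__(self):
        # Set of (urn, peer_id) pairs this node cares about.
        self.tracked = set()

    def track(self, urn, peer_id):
        self.tracked.add((urn, peer_id))

    def on_gossip(self, urn, peer_id):
        """Return True if this gossip is 'interesting' and should be replicated."""
        return (urn, peer_id) in self.tracked

node = Node()
node.track("rad:git:hnrkexample", "peer-1")
assert node.on_gossip("rad:git:hnrkexample", "peer-1") is True
assert node.on_gossip("rad:git:hnrkexample", "peer-2") is False
```

Under this model, a seed that "must be interested in everything" is simply a node whose tracked set is very large.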
2021-06-02 03:54:15 ubi#3948 * Recap
2021-06-02 03:54:15 - Durability Guarantee: underlying storage concern, outside of Radicle's scope
2021-06-02 03:54:15 - Replication Factor Guarantee: future, unknown what that means for Radicle
2021-06-02 03:54:15 Does that make sense?
2021-06-02 03:57:32 &fintohaps Ya, I think so. I might phrase it as:
2021-06-02 03:57:32 Replication Factor Guarantee: currently based on the gossip protocol (and its correctness), as well as the connectivity of the node. Room for something like, "nodes ack that they received gossip"
2021-06-02 04:05:50 ubi#3948 Yep, something like that
2021-06-02 04:06:19 ubi#3948 Which honestly, I think this makes sense only in the context of the seed nodes, because those are the ones people lean on to replicate their stuff
2021-06-02 04:08:59 &fintohaps Sure ya. Maybe something like, "what SHA do you have for rad:git:hnrkhello?" could help there
2021-06-02 04:10:20 ubi#3948 Now we are talking!
2021-06-02 04:16:10 ubi#3948 Maybe, the links I shared from gitlab give some insights in terms of how they are doing things (open source at the end as well), so I am not sure. I understand their architecture is different, but still, good content.
2021-06-02 10:04:43 --> cloudhead#2904 (@_discord_553305334345629728:t2bot.io) has joined #radicle:general
2021-06-02 10:04:43 cloudhead#2904 If you want an SLA, you can pay someone to run a seed node with an SLA, I don't think that's out of the question
2021-06-02 10:05:17 cloudhead#2904 but if you're getting a service for free, you can't expect much of a guarantee, only some probability that your data is safe
2021-06-02 10:06:02 cloudhead#2904 eg. as was said above with replication factor, we can theoretically wait for N announcements back of a ref you pushed, to consider the code probably replicated
2021-06-02 10:06:32 cloudhead#2904 the larger N is, the more likely your data is safe
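cloudhead's idea above, wait for N announcements of a pushed ref back from the network before treating it as probably replicated, can be sketched as a simple threshold check. This is a hypothetical illustration, not an existing Radicle feature; the function and data shapes are assumptions.

```python
# Hypothetical sketch: consider a pushed ref "probably replicated" once at
# least n distinct peers have announced it back. Not a real Radicle API.

def probably_replicated(announcements, ref, n):
    """Count distinct peers that announced `ref`; duplicates count once."""
    peers = {peer for peer, announced_ref in announcements if announced_ref == ref}
    return len(peers) >= n

# Example: two seeds announced the ref; seed-1 announced it twice.
seen = [
    ("seed-1", "refs/heads/main"),
    ("seed-2", "refs/heads/main"),
    ("seed-1", "refs/heads/main"),  # duplicate, counted once
]
assert probably_replicated(seen, "refs/heads/main", 2) is True
assert probably_replicated(seen, "refs/heads/main", 3) is False
```

The larger n is, the more confident you can be, which mirrors the probabilistic framing above: this yields a likelihood of safety, not a hard guarantee.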
2021-06-02 10:07:21 cloudhead#2904 but I think eventually it will make sense to run a seed node service with data redundancy and an SLA, and have users who want that pay for it
2021-06-02 10:41:10 ubi#3948 cloudhead did you read this comment? I am not suggesting that people must do anything when I use the term SLA
2021-06-02 10:44:35 ubi#3948 * So this is true, but I wasn't expecting anything. I am just trying to figure out where Radicle is, and hope that it becomes a real substitute for most solutions out there
2021-06-04 03:58:21 <-- @jhbruhn:jhbruhn.de (None) has left #radicle:general
2021-06-04 18:05:43 @viraptor1 Something Radicle won't have to worry too much about - fires in datacentres: https://pijul.org/posts/2021-06-03-on-fires/
2021-06-05 00:57:25 l0k18 hey, any go programmers here? I'm just wondering how I tell go about fetching radicle repos?
2021-06-05 01:00:53 @viraptor1 l0k18: do you mean an API for Go to do it programmatically?
2021-06-05 01:01:14 l0k18 I mean, how do I specify the URL in an import statement?
2021-06-05 01:02:42 l0k18 I run goland also, if there is any tips related to that
2021-06-05 01:03:02 @viraptor1 Ah, I see. That's interesting. Do you know if Go uses the git command internally or its own implementation?
2021-06-05 01:03:02 If it goes through the git command, then rad:... identifier could maybe work?
2021-06-05 01:03:36 l0k18 ok, so maybe like: rad://hnrkpmhnpcbw4i1i5qfqy8ioicsdgokmmipko.
2021-06-05 01:03:46 @viraptor1 Exactly
2021-06-05 01:04:42 @viraptor1 I don't think that's been a documented use case yet - may be worth raising an issue to track it since it will affect a few languages. Even if it works and just needs documenting.
2021-06-05 01:05:50 l0k18 * oh >_< ah well, yes, I like this a lot, and I guess there will be ENS and other alt-DNS system support too
2021-06-05 01:06:30 l0k18 I'll raise the issue... I can't develop on this platform without that working
2021-06-05 01:07:21 l0k18 obviously not necessarily able to do much as far as actual radicle repo project contribution goes... rust! crucifix gesture ;)
2021-06-05 01:07:42 l0k18 well maybe I can help with this integration anyway
2021-06-05 01:09:31 @viraptor1 If any changes need to be made, they would be likely on the dependency downloader side. Git with the rad helper script knows how to deal with those repos already. Unless the downloader uses some independent implementation, it shouldn't be hard to get things to "just work".
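The dispatch rule viraptor is relying on here is git's documented remote-helper mechanism: for a URL with an unknown scheme such as rad://, git invokes an executable named git-remote-<scheme> found on PATH (see gitremote-helpers(7)). A simplified sketch of just the name-resolution step:

```python
# Simplified sketch of git's remote-helper name resolution: a URL scheme
# maps to an executable called git-remote-<scheme> on PATH. Real git also
# handles <transport>::<address> syntax, omitted here.

def helper_for(url):
    scheme = url.split("://", 1)[0]
    return f"git-remote-{scheme}"

assert helper_for("rad://hnrkexample/project") == "git-remote-rad"
assert helper_for("ipfs://Qmexample/repo") == "git-remote-ipfs"
```

So as long as Go's dependency downloader shells out to the git command rather than reimplementing the protocol, rad:// URLs resolve through the same helper that radicle-upstream installs.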
2021-06-05 01:09:29 l0k18 awww... damn https://github.com/radicle-dev/radicle-upstream/issues/1469 I didn't know there was any connection between rust crates and npm at all. :(
2021-06-05 01:10:03 l0k18 There was a distributed Git system I encountered previously
2021-06-05 01:10:22 l0k18 just have to remember what the name was, probably the go integration is relevant
2021-06-05 01:10:42 l0k18 ah yes, something connected to ipfs
2021-06-05 01:11:18 l0k18 if nobody's done a Go repo on radicle yet there almost certainly won't be integration yet
2021-06-05 01:11:48 @viraptor1 l0k18: not sure what you mean by the connection between crates and npm in this context.
2021-06-05 01:11:48 They don't depend on each other, but upstream is built from Electron which needs JS libraries, and from a Rust backend which uses crates.
2021-06-05 01:12:04 l0k18 which repo in radicle-dev specifically covers the helper that would understand go's requests?
2021-06-05 01:13:59 l0k18 https://github.com/cryptix/git-remote-ipfs this is a helper for ipfs
2021-06-05 01:17:55 l0k18 I've forked that one there, I guess the theory is that I adapt it to talk to radicle network
2021-06-05 01:18:47 l0k18 i noticed that the GUI uses svelte, which is nice, it makes such fast UIs
2021-06-05 01:19:08 @viraptor1 I would expect it's either in -link or -upstream. Let me see if I can find it
2021-06-05 01:28:40 l0k18 import "hyntfjmdoqzgwdupr9wxybnros4uq8gmmqb1usg74kja6jrqd4pcq1/pokaz/cmd/demo"
2021-06-05 01:29:57 l0k18 simple, peer id and then monorepo path
2021-06-05 01:30:30 l0k18 * viraptor: no need to look any further it just works
2021-06-05 01:30:46 l0k18 thanks :)
2021-06-05 01:31:24 l0k18 just an aside, is there a more lightweight client for matrix networks? I have stayed away from matrix mostly because the web client is so heavy
2021-06-05 01:32:08 l0k18 it's gonna be a week before I have a workstation with more than 4gb and the swap party that just went on in my laptop just now was infuriating
2021-06-05 01:39:12 l0k18 fractal looks good
2021-06-05 02:00:11 @viraptor1 For the helper start here https://github.com/radicle-dev/radicle-upstream/blob/master/proxy/api/src/bin/git-remote-rad.rs
2021-06-05 02:03:11 l0k18 it's automatically put in place by radicle-upstream :) works ootb
2021-06-05 02:03:34 @viraptor1 Have you tried the Element client yet? (Not sure how the desktop one behaves, but mobile is good)
2021-06-05 04:40:34 l0k18 as in, system resolver will hear 'some.radicle.url' and go will see this but after the request is made the connection will loop back
2021-06-05 04:40:48 l0k18 kludgy, but it might work
2021-06-05 04:55:57 &xla:radicle.community Even with that you need something that terminates on your custom url.
2021-06-05 04:56:13 &xla:radicle.community Like the gateway for IPFS.
2021-06-05 05:06:57 l0k18 hm well, I'm gonna give up on this for the time being since I can't use it, but if anyone happens to have any ideas I'll lurk here. Having functional host-proxying at localhost that doesn't confuse go modules is probably a generalised solution that would equally help with ipfs
2021-06-05 05:40:58 l0k18 ok, https://github.com/radicle-dev/radicle-upstream/blob/master/DEVELOPMENT.md#proxy this seems to indicate to me there is a web server/proxy running somewhere, I just can't spot it using ss -atn
2021-06-05 05:47:46 l0k18
2021-06-05 05:47:46 loki@yoga13:~$ curl 127.0.0.1:17246
2021-06-05 05:47:46 {"message":"Resource not found","variant":"NOT_FOUND"}loki@yoga13:~$
2021-06-05 05:47:46
2021-06-05 05:47:46
2021-06-05 05:47:46 Is this the expected response from radicle's web server?
2021-06-05 05:48:16 l0k18 is there a url that gives some more information than this?
2021-06-05 06:15:28 l0k18 meh, nvm, I thought I might be able to play with this, but without support for module imports there is no differentiation, aside from the nice GUI, between this and IPFS as the back end
2021-06-06 03:22:29 ankushrajput Hi guys, can someone share at what price private investors got their placement in radicle?