
📄 Potentially Hosting Important Documents #1

@JFWooten4


Great work on finding another link to this document![^fn] Maybe we can look to preserve/host documents like this that have been removed, for redundancy and transparency?

Originally posted by @JamesAlfonse in e9774c2 💜


I've thought about this for a while because a lot of my comments reference public URLs. My thinking in the past was that it's best to cite the original source for the sake of giving credit. However, scenarios have arisen where that's not possible, due to either proprietary gatekeeping or unreliable hosting.

Proprietary Gatekeeping 🚧

There are a few documents central to the business ownership side of TAD3 which I plan to incorporate into TAR2. I've referenced them in WhyDRS/SEC-Comments#1. Namely, you need to create an account and sign up for a "free trial" before you can download the content, despite the reality that it is distributed under a Creative Commons license, promoting the free sharing thereof.

As a principle, this sort of centralized hosting arrangement only serves to maintain the knowledge disparities present between today's social classes. Given the negligible cost of sharing electronic documents, the only factor I'd consider in their dissemination is the labor to produce them, be it research, experimentation, or otherwise.

However, in this example, the documents in question are historic, legal, or otherwise nonbusiness, noncreative works which play important roles in telling the history of centralized securities trading markets in the modern context of crypto systems. There is no reason people should need to pay money to access files implicated in either regulation or court cases. By paywalling or otherwise preventing the sharing of this content, we enable a class of bureaucratic lawyers who specialize in the trivial retrieval of select data, available to them only through privileged siloing and confidentiality requirements. In such a system, others must pay you to think because they cannot access the source materials you use to form a prudent mind.

Unreliable Hosting

In the example James highlights, I had to cite an Archive.org link because the original document was no longer online. Upon receiving some attention from our investing community, the middleman in question removed the content from their site. Thus, only an archive of the content still hosts the damning PDF—at least online.

I like Archive.org links because they clearly show the source of any content in the upper URL bar. This gives an authoritative, presently uncorruptible 501(c)(3) nonprofit attestation to web integrity. I find this important because the alternative is hosting a PDF on your own website and saying it's "the real thing."

Altering PDFs has been the lowest rung of financial crime for decades, and the threat thereof is one of the largest reasons I walked away from centralized asset management. When the history of your performance relies exclusively on centralized, intermediated trust, it seems nearly impossible to succeed independently. You always require the explicit or implicit approval of a central actor, be it via access to the market or the illustrious social approval of today's elite and wealthy.

Thus, I've stuck to raw sources. This has the added benefit of not exposing myself or an organization to some kind of copyright or takedown threat. Given the magnitude of funds present in our adversaries, such activity seems quite likely in the face of individual outcry. It only takes a few grand in legal work to silence most small investors, and for good reason. There exists practically no reputational damage in thrashing out against a dispersed, unorganized group of very minor threats.

Looking Forward 🌄

But now that we have the nonprofit, we can break down their past scare tactics. Indeed, with our ability to self-host, we don't need to speak in fear of a deplatforming strike. Rather, we can wield our unity as a reflective shield against their threats. Imagine broker XYZ sues the DUNA for hosting a damning file detailing their business practices. What kind of impact would that have on their business, especially in an industry so centered around reputation[^1]?

In ruminating on these thoughts, I asked AI to help with some kind of site design to work around these factors. Namely, hosting (even moderately) large files like a PDF directly in a git repo is not best practice. It's certainly doable and something I've implemented in the past. But past a reasonable size, it's my understanding that real scaling limitations come into play, and GitHub has its own (soft) limits on excessive file content. In any case, this implies external hosting of files, which I'm moving toward as I incorporate the podcast and meetings onto IPFS for efficient, decentralized community sharing.

One option, adapted from GPT, uses a small edge script that fetches a document from an IPFS gateway by its content hash. We set the hash for a document and host the file on our own computers; thereafter, changing a document's contents would require committing a new file hash to the repo. The code looks like this, and notably could itself live in the repo:

```js
// Edge worker (service-worker syntax): proxy a pinned IPFS document.
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Content-addressed: changing the document means committing a new CID here.
  const ipfsUrl = "https://ipfs.io/ipfs/<CID>";
  const response = await fetch(ipfsUrl);

  // Copy the gateway's headers but force the PDF content type.
  const headers = new Headers(response.headers);
  headers.set("Content-Type", "application/pdf");

  return new Response(await response.blob(), { headers });
}
```

This has the added benefit of making our (potential) web pages just little snippets of code. If we did receive any kind of takedown notice, past URLs linking to any particular document page could be set up as URL redirects to either an Archive.org link or the source material if available. Thus, there always exists an option to stop "hosting" (by means of running local file sharing) documents by changing the pointer in any such site to a redirect. The original file still exists on all node computers run by community members, but they won't serve web requests anymore if we need to remove something.
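The "pointer swap" above can be sketched as a small pure function. The names here are illustrative assumptions, not anything in the repo today: `resolveTarget` takes a hypothetical document record and returns either the IPFS gateway URL or, once the record is marked removed, a redirect target such as an Archive.org mirror.

```javascript
// Minimal sketch of the takedown fallback described above. The record shape
// (cid, removed, fallbackUrl) is hypothetical; committing removed: true in the
// repo is the "pointer" change that turns a hosted page into a redirect.
function resolveTarget(doc) {
  if (doc.removed) {
    // Stop serving from IPFS; point old URLs at the archive or original source.
    return { url: doc.fallbackUrl, redirect: true };
  }
  return { url: `https://ipfs.io/ipfs/${doc.cid}`, redirect: false };
}

// Example document record as it might live in the repo (values illustrative).
const doc = {
  cid: "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi",
  removed: false,
  fallbackUrl: "https://web.archive.org/web/...",
};
```

The worker would then either proxy the file or return a 301 to `fallbackUrl`, while community nodes keep the original bytes pinned regardless.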

What do you think? Should we have some kind of site where community members add important DRS documents for reference in our work across internet platforms? If so, what naming would you like to see for the domain (or its paths)? Choices here naturally ought to be longstanding given the permanence of regulatory filings with web URLs. 🔗 That makes it something I definitely want to have some discussion about beforehand.

Voting link, since I made this an issue rather than a voteable discussion.

Footnotes

[^1]: Given its lack of substantive value provided.
