Fix linkcheck fail re duckdns and pi-hole #514
Comments
Seems that the pi-hole main website is down, the …
Looks like with the latest PR submission, the pi-hole linkcheck fail has resolved itself. See: #515
And from the most recent build here on GitHub there were no errors again. But directly thereafter, with a local build, I got the duckdns failure again. Just noting, as we may want to drop that link if it's so flaky.
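As a point of reference, and purely as a sketch: assuming our docs are built with Sphinx (the linkcheck builder suggests as much), a flaky domain could be excluded from the automated check rather than removed from the docs. The URL pattern below is an illustrative assumption, not an agreed change:

```python
# conf.py -- sketch only; the exact URL pattern here is an assumption for illustration.
# The Sphinx linkcheck builder skips any URL matching these regexes, so a single
# flaky external site does not fail the whole documentation build.
linkcheck_ignore = [
    r"https://www\.duckdns\.org/.*",  # assumed flaky DuckDNS link from this issue
]
```

That would keep the link visible in the rendered docs while excluding it from the automated check, though dropping the link entirely may still be the simpler option.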
@Hooverdan96 Re:
Either would do. But ideally we can find a more stable link. Maybe the project is on the move; I've not looked at what's going on with it myself. Just wanted to stress that a single flaky link should be dropped, as more often than not it's not worth it. Plus we have turned our docs somewhat into a link farm over time. Way too many external links now, in my opinion.
Well, looks like the site is currently down for real (seems to have happened over the last two days). Considering it's run by two people (at least that's how it started), it's impressive that it's still around and free. I guess the link farm needs to be a balance between convenience of access to external sources and being informative (i.e., no need to duplicate details if they're available somewhere else and continuously updated). I'm OK with removing the link and just mentioning the service. Any search engine will bring it up anyway ...
@Hooverdan96 Re:
This one is tricky: taken literally, we should have no info re btrfs as it is all available upstream. However, at one point early on we had one of the only up-to-date and accurate details on the minimum number of drives required for one of the raid levels; @schakrava then submitted a pull request to correct upstream's error/omission. Plus upstream, in many cases, is often far more expert-orientated than we are. I.e. not all projects are appliance based. We, for example, need not mention many details that would swamp many of our 'appliance'-orientated users. So yes, always a balance, and in many cases we can simply reference upstream in our own appliance-orientated 'summaries' and have this covered. One approach is to defer to upstream, such as I did in our Tailscale instructions on installing it. We can also indicate canonical references but keep summaries: as per our btrfs-orientated Pools doc entry. Using reductio ad absurdum, 'upstream is king for all docs, so why have docs that are more than upstream links' means we need only a link farm: or possibly a link to GitHub from which folks can find their way to all our source and all the source for all the elements of our base OS :). Summaries in simple language help.

I think what we need is some kind of link policy for the entire project. Something along the lines of avoiding, where possible, deep links to projects, as they, like us, move stuff around. And as you say, folks can search. We also need no links that are beyond the bounds of our concern, or overly historical. Rock-ons have links to their upstream. And their docker image. That's good. Rock-on write-ups have links to upstream doc sections. That seems reasonable, but could also be cut as they too move around. Just trying to avoid the gazillion links we are building up. And to defend the reason we summarise. I think we are all on the same page here, but I've definitely fallen down the rabbit hole of linking to very specific pages that may well die at the drop of a commit. @FroggyFlox's linkcheck work has really surfaced just how many links we actually use. Tricky, as we have a tendency to assume what 'is' will persist. It does not. Ergo I propose we err on the conservative side re deep links, without referencing only GitHub. Which, in turn, may also one day control traffic to non-paying projects. Our entire endeavour stands on a house of cards. And a gazillion links around the fluid web is not a robust stance.

A solid example of a catastrophic docs failure: we had one when I first took over overall project maintenance. Our docs at that time had a ton of YouTube links. They all died one day as the prior maintainer had due cause to close that account. We now use no YouTube links. That is intentional. I stripped them all out and replaced them all with self-hosted pics. The no-YouTube-links move (pending in contributor Policy perhaps) also removed a massive burden, cookie-wise, that we placed on all our visitors. With some of those cookies lasting years in some folks' default browser setups. We all already know all of the above of course: hence the suggestion re a Link Policy in our existing:
that we can develop in collaboration and incrementally. I'm betting a majority of sites do not do as we do and work on breaking no links: and even then, some of my own work here of late has re-arranged sub-headings that could have been linked to externally. So maybe an element of the policy should be to use clean, high-level links only. E.g. my recent changes in the Tailscale install how-to removed some deep links to their docs and replaced them with 'visit main site and select download'. But the day before those changes, it was not possible, without unreasonable investigatory time/effort, to even find those deeper docs links we used/needed before upstream updated to having Leap 15.6 appropriate links. So super hard to be hard-and-fast here, I know. To call out a concrete example of where we are heading in the wrong direction here, I think it is the following PR:
We are not the historians of Nginx, nor are we obliged to indicate its non-technical origins. Nor does a systemd explanation belong there: some more context is available via my #489 (comment) on that PR. This 'Fix linkcheck fail ...' issue's spin-off discussion re a Link Policy looks to be part of our bit-by-bit approach.

As per the above PR, there were technical difficulties in transferring the original content, and we also want to honour contributors' work by acting on it. But not at the cost of becoming a spaghetti-doc link farm, with new failing links almost every week :). And, as implied, I'm likely one of the main offenders here. It's tempting to link to every upstream website detail there is to help folks, but as you point out: they tend to their clients. We should tend to ours. Ergo we acknowledge upstream, do high-level links only (if possible), and document only that which pertains to our particular use/interface/implementation of their work. Summaries permitted. Their docs, thereafter, should suffice. And if they do not, we look to removing our coverage of their efforts; or we contribute upstream to their docs/efforts, which I believe we have all now done already, across a few projects over the years.
Certainly a good rule of thumb to follow about the deep links. Though, in your own defense, I actually find it great that you do document and reference quite a few things, especially when important to the core of Rockstor itself, or to support a design decision made not because the project necessarily wants to, but because it's the pragmatic approach. Now, the example of duckdns could be a tricky one within a policy of "high-level" links, since that is the highest level available on that site. But a scenario like that could also just be referenced by name, since users are either already aware of the dynamic DNS concept, or, if they're new to it, they can start their own in-depth research (which Rockstor for sure would not want to take on as part of the documentation). But I get your point and in general agree with you. As you've said, it's complicated to make it a hard-and-fast policy because there are enough instances where the deep link is preferable to surface more buried references.
Oh, w.r.t. the nginx example: I think these tutorial-style write-ups could represent an exception, partially because they're community contributions, and partially because they're a good entry point for someone using Rockstor who wants to expand their horizon with something new (e.g. nginx) ... so maybe still tighten down on the external references, but remain a bit more liberal there ...
On a recent unrelated doc pull request we have the following linkcheck fails:
[EDIT] We have, as of this edit, also now seen the following fail (local build):
With the first also exhibiting a 502 when run locally.
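As an aside, and again only as a sketch assuming the Sphinx linkcheck builder is in use (values below are illustrative, not agreed settings), transient failures like a one-off 502 can be made less noisy by allowing the checker a retry and a longer timeout in conf.py:

```python
# conf.py -- illustrative values only; tune to taste.
# Attempt each link more than once before reporting it broken, so a momentary
# upstream hiccup (e.g. a transient 502) is less likely to fail the build,
# and give slow sites a little more time to respond.
linkcheck_retries = 2   # attempts per URL before it is declared broken
linkcheck_timeout = 30  # seconds to wait per request
```

That would not help with a site that is genuinely down for days, but it should cut down on false positives from momentary outages in local and CI builds alike.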