docs: copy
neurosnap committed Dec 5, 2024
1 parent 9941ed2 commit 5f3a2ef
Showing 7 changed files with 103 additions and 162 deletions.
@@ -1,8 +1,10 @@
---
title: rfc-7 prose with style
title: prose with style
description: Providing stability through radical experimentation
date: 2024-10-01
tags: [rfc]
tags: [ann]
aliases:
- rfc-007-prose-with-style
---

We just deployed a new feature for `prose` that we think is worth an
22 changes: 14 additions & 8 deletions rfc/rfc-001-radical-experimentation.md
@@ -5,6 +5,12 @@ date: 2022-01-22
tags: [rfc]
---

| | |
| ---------------- | --------------- |
| **status** | published |
| **last updated** | 2024-12-04 |
| **site** | https://pico.sh |

We want to create and maintain services that we love to use. We also want to
embody a mindset where we rapidly iterate on new ideas and kill ones that are
not serving us.
@@ -13,17 +19,17 @@ If you didn't already read, we decided to
[shutdown lists.sh](/ann-012-lists-shutdown-notice). This is the first service
that we shared with the world and is also the first service we are shutting
down. Pruning is a critical part of radical experimentation. We cannot allow one
bad apple to spoil the bunch. We could have easily continued to maintain lists
sour apple to spoil the bunch. We could have easily continued to maintain lists
for eternity, but that doesn't serve our mission. It's a weak will to ignore
what is plainly obvious: **no code is sacred at pico.sh.**

We recently adopted a new tagline: **hacker labs**. When I think about what that
means I think about radical experimentation. This necessitates that we create
**and** more importantly, destroy ideas as quickly as possible. When we see
something that isn't working, we need to prune it. Hacker labs requires us to
think about new ideas from first principles, to fundamentally challenge the
status quo, and to be a beacon for like-minded individuals to rally around.
Hacker labs is our stake in the ground: come join us.
We recently adopted a new tagline: **hacker labs**. When we think about what
that means we think about radical experimentation. This necessitates that we
create **and** more importantly, destroy ideas as quickly as possible. When we
see something that isn't working, we need to prune it. Radical experimentation
requires us to think about new ideas from first principles, to fundamentally
challenge the status quo, and to be a beacon for like-minded individuals to
rally around. Radical experimentation is our stake in the ground: come join us.

Our primary directive is to build tools and services that we **need** to use.

82 changes: 31 additions & 51 deletions rfc/rfc-002-rss.md
@@ -5,11 +5,28 @@ date: 2022-08-10
tags: [rfc]
---

| | |
| ---------------- | --------------------- |
| **status** | published |
| **last updated** | 2024-12-04 |
| **site** | https://pico.sh/feeds |

RSS/Atom is a great companion in the smol web. It's relatively standard, easy to
write, easy to consume, and provides users with choice on how to view their
feeds.

I think an RSS service using an SSH app could be useful.
We think an RSS service using an SSH app could be useful to the pico.sh
platform. We like the idea of our notification system being completely opt-in,
including for [pico+](/rfc-004-pico-plus) users. We can also build internal
tooling around RSS. For example, we can monitor all
[new sites on pages](https://pgs.sh/rss). Further, this can be just as useful for
feeds outside of pico.

This service can be run with three small go services:

- An SSH app to receive feed files
- A cron job to fetch feeds and send digests
- A web service for keep-alive events

# features

@@ -18,71 +35,29 @@ I think an RSS service using an SSH app could be useful.
- Ability to upload [opml](https://en.wikipedia.org/wiki/OPML) file
- We would manage fetching feeds and keeping them up-to-date
- We could send an email digest (if they provide their email)
- Provide a web view for the feeds

# what can we offer over the other readers?

We would try to provide a great reading experience from the terminal. No need to
install an RSS reader like newsboat. No need to sync a config file across
multiple apps. Just go to your rss reader homepage and start reading. Furthermore,
many of the readers do not provide an rss-to-email feature and most rss-to-email
services do not provide readers so there's an interesting opportunity here to
capture both audiences.

The other nice thing about an RSS reader app is that it ties into our other
services that leverage RSS as well. It's hard to let users know of new features
when they aren't notified about them.

By providing a service that emails users, we would hopefully improve our
communication with them.

Because the web version doesn't require authentication, anyone could navigate to
any user's feed collection and read its content. This would also provide mobile
support for users since they can just navigate to our website. The only issue is
we might have to deal with content security policy and ensuring we could render
the html content consistently. It definitely opens us up to a bunch of edge
cases. Creating a proxy service might be necessary in that case.

# how it works

A user would `scp` a file containing a list of rss feeds
A user would copy a file containing a list of rss feeds.

It doesn't matter how many feed files the user uploads, we would dedupe them
when figuring out how to fetch their feeds. Because an RSS feed can contain a
bunch of metadata about a feed, we should capture as much of that as possible
inside the `posts` table. The downside is we use `posts` for a lot of our
services (e.g. lists, prose, and pastes) so we want to be careful not to
overload this table. Having said that, I think an rss feed fits into the post
paradigm. We just need to add a `data jsonb` column to `posts`.

```sql
ALTER TABLE posts ADD COLUMN data jsonb;
```
It doesn't matter how many feed files the user uploads. Because an RSS feed can
contain a bunch of metadata about a feed, we should capture as much of that as
possible when presenting it to the user.

## fetching

We want to be smart about how we fetch feeds because it could be resource
intensive if the service gets big enough.

What would trigger fetching feeds?

- Maybe we just use a cron?
- Prior to sending out daily email digest

Fetching feeds can be a little tricky since some feeds do not provide the html
inside their atom entry. Instead they provide a link for users to click on to
navigate to their site. This kind of defeats the purpose of using RSS so we
could just render the link and force users to open their browser. Or we fetch
the link provided in the atom entry and store the html in our database. This
would probably provide a better user experience but it opens us up to a slew
of edge cases and weird behavior. For now, we are simply showing what we can in
the email and the rest are links to external sites.
For now, we are simply showing what we can in the email and the rest are links
to the originating sites.
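
The fallback above — show the entry's html when the feed provides it, otherwise link out to the originating site — might be sketched like so (the `entry` struct is a hypothetical stand-in for a parsed atom entry):

```go
package main

import "fmt"

// entry is a minimal stand-in for a parsed atom entry.
type entry struct {
	Title   string
	Link    string
	Content string // html body; often empty in link-only feeds
}

// renderEntry returns the html we can show in a digest: the entry's own
// content when the feed provides it, otherwise just a link to the
// originating site.
func renderEntry(e entry) string {
	if e.Content != "" {
		return e.Content
	}
	return fmt.Sprintf(`<a href=%q>%s</a>`, e.Link, e.Title)
}

func main() {
	fmt.Println(renderEntry(entry{Title: "hello", Link: "https://blog.pico.sh/hello"}))
}
```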

## email digest

We also think that if we do send out a daily digest, we should add a button in
the email that users need to click within 6 months or else we disable sending them an
email. They click the button in the email -> we delay disabling it for 6 months.
email. They click the button in the email and then we delay disabling it for 6
months.

## tracking feed entries

@@ -116,3 +91,8 @@ CREATE TABLE IF NOT EXISTS feed_entry (
ON UPDATE CASCADE
);
```

# conclusion

RSS is a standard way to notify users of new content on a site and we see it as
critical to the function of pico.sh.
114 changes: 25 additions & 89 deletions rfc/rfc-003-imgs.md
@@ -5,18 +5,24 @@ date: 2022-08-11
tags: [rfc]
---

The pico team has been thinking about a new image hosting service. We haven't
written a single line of code yet but have spent time thinking about it. This
document serves as our proposal not only for how the service ought to function,
but also details about the technical implementation.
| | |
| ---------------- | ---------------------- |
| **status** | published |
| **last updated** | 2024-12-04 |
| **site** | https://pico.sh/images |

We want to provide an image hosting service.

This document serves as our proposal for how image hosting ought to function and
details about the technical implementation.

# images as a service

It's an image hosting service. Users will be able to upload their images and it
will be instantly publicly sharable, including hotlinking. Further we will
support an image manipulation API which will be awesome for quick tweaks to
width and height ratio, quality, rotation, etc. The intention is to store the
images permanently until service is canceled.
will be publicly sharable, including hotlinking. Further, we will support an
image manipulation API which will be awesome for quick tweaks to width and
height ratio, quality, rotation, etc. The intention is to store the images
permanently until the service is canceled.

Based on [previous research](https://blog.pico.sh/imgs-market-research), and in
order to stay competitive with other image hosting services, we would need
@@ -78,97 +84,27 @@ review.

# technical details

I think we should build this to potentially support multi-region. But we would
implement this service similarly to our other services. I think we will be able
to leverage our CMS to handle most of the heavy lifting. Uploading an image
would use `scp` and we would store the image inside the `posts` table.
Uploading an image would use [file uploads](https://pico.sh/file-uploads) and we
would store the image inside the `posts` table.

Then we would build out a web api for retrieving the images.

## third-party services interacting with imgs

Since we have a monorepo setup, we could pretty easily just reach into the code
for `imgs` inside `prose` and perform the necessary operations within `prose`.
For image manipulation, we can use
[imgproxy](https://github.com/imgproxy/imgproxy).

## where do we host the files?

This is tricky. We could store the files to S3 or some other object storage, but
the costs are pretty high. We could store the files directly on our VM FS, but
we'd need to make sure we have enough space and it can scale. We decided to
self-host a [minio](https://github.com/minio/minio) instance for our object
storage service.
We could store the files in S3 or some other object storage, but the costs are
pretty high. We could store the files directly on our VM FS, but we'd need to
make sure we have enough space and it can scale. We decided to self-host a
[minio](https://github.com/minio/minio) instance for our object storage service.

## integration with pico services

The entire point of this service is to enhance our pico services with image
hosting capabilities, so it's critical we figure out the ergonomics of
integrating this service with pico.

Ideally, the user would be able to upload images on `prose` and we would reach
out to the `imgs` service to store them. Once the image has been uploaded to
`imgs` any reference to the image would be swapped at runtime inside `prose`.

Let me demonstrate an example workflow inside a `prose` blog:

User's blog folder at `~/blog`:

```bash
blog/
trip-to-paris.jpg # image to upload to imgs
tour-to-paris.md # blog post that contains reference to image
```

Inside `tour-to-paris.md` we would have something like:

```md
---
title: My trip to paris!
---

My trip was great! Here is a pic from my trip

![](/trip-to-paris.jpg)

It's a tourist trap but we couldn't resist checking it out.
```

Once the content is written, the user would upload all files to `prose`:

```bash
scp ~/blog/*.md ~/blog/*.jpg [email protected]:
```

Now when a blog post is requested, we do a few things:

- Find the markdown post
- Scan for relative image urls
- Replace the URL with `imgs.sh` url
- Convert markdown to HTML

Before:

```md
---
title: My trip to paris!
---

My trip was great! Here is a pic from my trip

![](/trip-to-paris.jpg)

It's a tourist trap but we couldn't resist checking it out.
```

After:

```md
---
title: My trip to paris!
---

My trip was great! Here is a pic from my trip

![](https://erock.imgs.sh/trip-to-paris.jpg)

It's a tourist trap but we couldn't resist checking it out.
```
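
The URL-swapping step could be sketched as a regex pass over the markdown, assuming root-relative image paths (the helper and its regex are hypothetical, not the actual implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

// relImg matches markdown image references with root-relative paths,
// e.g. ![](/trip-to-paris.jpg).
var relImg = regexp.MustCompile(`!\[([^\]]*)\]\((/[^)]+)\)`)

// rewriteImages replaces relative image urls with absolute imgs.sh urls
// for the given user, leaving the rest of the markdown untouched.
func rewriteImages(md, user string) string {
	return relImg.ReplaceAllString(md, fmt.Sprintf("![$1](https://%s.imgs.sh$2)", user))
}

func main() {
	fmt.Println(rewriteImages("![](/trip-to-paris.jpg)", "erock"))
}
```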
So when you upload an image to [prose](https://prose.sh) or
[pages](https://pgs.sh), we use the same functionality for storing and
manipulating images.
24 changes: 15 additions & 9 deletions rfc/rfc-004-pico-plus.md
@@ -5,26 +5,32 @@ date: 2023-12-31
tags: [rfc]
---

| | |
| ---------------- | -------------------- |
| **status** | published |
| **last updated** | 2024-12-04 |
| **site** | https://pico.sh/plus |

# mission statement

We want to build tools and services that are useful for software development. We
want to empower individual contributors to rapidly prototype and boost
productivity.
productivity. We want to enable homelabs to host public web services.

# design goals

- Primary directive is to be useful to ourselves
- It's something we (team pico) need to use
- Developer tools and services
- Focus on individual and small developer teams
- Enable developers to rapidly prototype
- Ability to host small web services
- Authentication with SSH
- Leverage SSH

# what it is not

- Not a PaaS
- Not designed for large organizations
- Not going to provide 99.99% uptime

# services

@@ -38,14 +44,14 @@ copying files to our SSH app.

## tunnels as a service

Need to access `localhost` from `https`? Not only that, but we also use tunnels
to allow you to connect to all your other containers.
Get automatic TLS for web services hosted locally. This includes tcp,
websockets, and http.

# pricing

$2/mo billed annually.

We would like to keep pricing as simple as possible to reduce overhead. The
current idea is we only offer a yearly subscription service. Ideally we would be
able to charge somewhere around $20/yr, but that might change depending on how
much compute we offer users. I think we could implement a tier pricing model but
that is kind of a pain. It would be better if there was just one single plan
that works for most users.
much compute we offer users.
5 changes: 5 additions & 0 deletions rfc/rfc-005-link-aggregator.md
@@ -5,6 +5,11 @@ date: 2024-01-20
tags: [rfc]
---

| | |
| ---------------- | ---------- |
| **status** | draft |
| **last updated** | 2024-12-04 |

We want to create a link aggregator service that can only be accessible via SSH.
Think hacker news but authentication and authorization happen via SSH.
