• 34 Posts
  • 33 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • I’m not an open source guy - redistribution restrictions (as well as restrictions for corporate and commercial use) are non-negotiable for me. You’re welcome to learn from the source code, and anyone is free to fork and make whatever changes they want for personal use.

    The license history for this project is MIT > PolyForm Strict > a fork of PolyForm Strict that explicitly allows changes for personal use (named the “Komorebi” license, since changing the text of PolyForm licenses requires removing the PolyForm trademark).

    If anyone is interested in the story behind the initial MIT > PolyForm Strict switch, the tl;dr is that I decided to explicitly restrict redistribution after someone did a rename of the project and started selling it on the Windows Store. A lot has happened since then that has changed my views on open source in general.

    non-standard

    OSI licenses are not “standard” by any stretch of the imagination, and I personally don’t want to have anything to do with licenses which would permit the use of my software in the mass murder of children.











  • Thank you! Though please keep in mind this part of the README 😅

    While Satounki is currently in a functional state, there are no documented steps for deployment and I don’t recommend that anyone use this software for anything mission-critical just yet.

    Depending on how badly my current job search goes (lol), I’m hoping to have this in an easily deployable format for both NixOS and Kubernetes. In the meantime, it’s not too difficult to get it up and deployed if you follow the development instructions and provision the credentials in the relevant places 🤞


  • tl;dr all the same caveats with self-hosted software apply; don’t do anything you wouldn’t do with a self-hosted database or monitoring stack.

    Well the actual rules — who gets access to what

    The rules themselves are the same public rules in the IAM docs for AWS, GCP, etc. Collections of these public rules (e.g. the storage_analytics_ro example in the README) defined at the org level will likely be stored in two places: 1) in a (presumably private) infra-as-code repo, most probably using the Terraform provider or a future Pulumi provider, and 2) in the data store backing the service, which I talk about more below.

    “Who received access to what” is tracked in the runtime logs and audit logs. However, since this is a temporary elevated-access management solution, where anyone who is given access to the service can make a request that can be approved or denied, it is not the right place or tool for a general, long-lived least-privilege mapping of “this rule => this person/this whole team”.

    where is that stored and how is it secured, to what standards?

    This is largely up to the team responsible for the implementation and maintenance, just like it would be for a self-hosted monitoring stack like Prom + Grafana or a self-hosted PostgreSQL instance. You can have your data exposed through public IPs, FQDNs and buckets with PostgreSQL or Prom + Grafana, or you can have them completely locked down and only available through a private network, and the same applies to Satounki.

    Is there logging, audit, non-repudiation, tamper-proof, time-stamping etc.

    Yes, yes, yes, yes and yes, though the degree of confidence in each depends on the competence of the people responsible for the implementation and maintenance of the service, as is the case with all things self-hosted.

    If deployed in an organization which doesn’t adhere to at least a basic least-privilege permissions approach, there is nothing stopping a bad internal actor with Administrator permissions wherever this is deployed from opening up the database directly and making whatever malicious changes they want.









  • I wish I had more advice, but I’m in a similar boat, just got laid off earlier this month after being with the same company from Series A in 2018 all the way until today. I’m sending job applications and trying to get interviews, but it’s hard to get past the resume screening stage, even with 8+ years of experience.

    I’ve mainly been working in DevOps/SRE/Platform Infrastructure, but I am also an accomplished developer with a pretty thick portfolio of widely used open source projects, though it doesn’t seem to matter.

    There are so many applicants for every single job now that it feels hopeless, and of course every single opening wants you to waste your time on multiple asinine LeetCode gotcha questions.

    If I lived somewhere with a public health system I’d love to take what money I have saved up and open a traditional middle eastern bakery, but I need to do something that will bring health coverage for myself and my family. Who knows, I might just end up working at Trader Joe’s. 🤷‍♀



  • It’s not exactly a traditional RSS feed, but I run a feed of my highlights on all things related to software development, and I’m an experienced DevOps engineer so a lot of my highlights are coloured by that experience.

    If you come across a highlight that is interesting, you can click through to read the whole source article or comment. You can check out an HTML version before you decide if you wanna subscribe to the RSS feed.










  • The whole point is that you can build a working container image and then ship it to a registry (including private registries) so that your other developers/users/etc don’t have to build them and can just run the existing image.

    Agreed, we still do this in the areas where we use Docker at my day job.
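
    Concretely, that build-once, ship-everywhere flow is something like this (the registry and image names here are made up for illustration):

```shell
# Build the image once, on one machine or in CI
docker build -t registry.example.com/team/app:1.4.2 .

# Push it to a shared (possibly private) registry
docker push registry.example.com/team/app:1.4.2

# Everyone else just pulls and runs the prebuilt image; no build step
docker pull registry.example.com/team/app:1.4.2
docker run --rm registry.example.com/team/app:1.4.2
```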

    I think the mileage with this approach can vary depending on the languages in use and the velocity of feature iteration (ie. if the company is still tweaking product-market fit, pivoting to a new vertical, etc.).

    I’ve lost count of the number of times where a team decides they need to npm install something with a heavy node-gyp step to build native modules which require yet another obscure system dependency that is not in the base layer. 😅
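
    As a sketch of how that usually gets fixed (the specific packages are hypothetical, but typical for node-gyp builds):

```dockerfile
FROM node:20-slim

# The base-layer fix: node-gyp needs Python and a C/C++ toolchain,
# none of which ship in the slim image, so `npm install` fails while
# compiling native modules until these are added.
RUN apt-get update && apt-get install -y --no-install-recommends \
      python3 make g++ \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
```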






  • I understood your point, and while there are situations where it can be optional, at the scale of hundreds of developers who work almost exclusively on macOS and who mostly don’t have any real Docker knowledge, let alone enough to set up and maintain alternatives to Docker Desktop, the only practical option becomes paying the licensing fees to enable the path of least resistance.





  • Hi!

    First I’d like to clarify that I’m not “anti-container/Docker”. 😅

    There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don’t wanna copy-paste everything from there, but I’ll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:

    Some high level points on the “why”:

    • Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it’s nice not to have to worry about a docker build command in the on-boarding docs failing inexplicably (from the POV of the regular joe developer) from one day to the next

    • Cost: Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required) - this is very steep for something that doesn’t guarantee reproducibility and has poor performance to boot (see below)

    • Performance: Docker performance on macOS (and Windows), especially storage mount performance remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default

    I think it’s also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don’t really apply to the latter.
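
    To illustrate the reproducibility point above, a Dockerfile like this hypothetical (but very typical) one can produce a different image from one day to the next even though nothing in the repo changed:

```dockerfile
# "latest" moves every time a new Node release ships
FROM node:latest

# the package index changes daily, so this installs whatever
# version of curl happens to be current today
RUN apt-get update && apt-get install -y curl

COPY package.json ./

# without a committed lockfile, semver ranges float
RUN npm install
```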