• 3 Posts
  • 30 Comments
Joined 4 months ago
Cake day: July 12th, 2024



  • Came to second this. I have an old HP Chromebook that is indestructible, has insane battery life, and still has a few years of updates left. The built-in Linux terminal is fine, and just about anything you can get through apt-get, dpkg, or otherwise works fine as well (if there is an ARM version); it’ll even add menu entries for GUI apps.

    I do light reading or dev work on it, and use the built-in terminal to keep track of and ssh into my remote boxes. I take it on the road to take notes or hop on wifi.
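
    For anyone curious, it’s all standard Debian tooling inside the built-in terminal; the package and host below are just examples:

        sudo apt-get update
        sudo apt-get install -y gimp             # GUI apps get a launcher menu entry automatically
        ssh -i ~/.ssh/id_ed25519 me@example.com  # checking in on a remote box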

    When I first got it the interface was kinda crap for a laptop, but through the updates (dark mode, new menu, etc.) it’s actually just fine now.

    It’s slow and low on RAM, only usable for a few tabs at a time, but for what I use it for it does fine, and it was cheap enough that I won’t cry if it dies.



  • Just came here to say you could always look for alternative projects that have this built in as well. I’m not sure what logs you are looking at, but it might be best to contribute or request this feature directly in the software.

    For example, I use crowdsec, and they have a button on the log pages that anonymizes the entire page, which is great for taking screenshots.

    I agree with another poster that getting something to work with a number of different logs would be a huge undertaking and unrealistic for most solo devs. I do think asking whatever project you’re using could be a start. I’d love it if journalctl, syslogd, etc. had a flag to anonymize the log output.
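
    In the meantime, piping the output through a filter gets you part of the way; a rough sketch (the patterns are only examples and will miss plenty, like IPv6 addresses):

        # mask IPv4 addresses and the local hostname before sharing
        journalctl -b | sed -E \
            -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/x.x.x.x/g' \
            -e "s/$(hostname)/anon-host/g"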

    Personally, oftentimes I just open the screenshot in GIMP and pixelate out the areas I want hidden, but that’s not an automated solution.




  • h0bbl3s@lemmy.world to Linux@lemmy.ml · linux as business/ company pc? · 4 months ago

    I’d suggest one of the Fedora Atomic installs. Maybe even get a couple of renewed ThinkPads all set up, one with KDE and one with GNOME, and let them play with them for a few days. I was the only engineer in my company that ran Linux, and the boss’s only concession was that I carry a Windows PC too when he was onsite with me, so he’d understand what I was doing; he provided a nice one for me, so I never complained.
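
    One nice side effect of the atomic variants is that switching a test machine between desktops is a single rebase, and rolling back is just as easy. The release tag below is only an example, so check what’s current:

        # swap the GNOME image (Silverblue) for the KDE image (Kinoite)
        rpm-ostree rebase fedora:fedora/41/x86_64/kinoite
        systemctl reboot
        # undo it if they hate it
        rpm-ostree rollback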


  • No offense taken; we all have different knowledge and backgrounds. I have a general understanding of podman, but now I’m going to go play with it a bit at some point and get more familiar with it.

    Docker is Apache 2.0 licensed, so it is open source, or at least all of the important parts are; I’m not sure about Docker Desktop. It’s partly that I just have a lot of experience with Docker, and partly that it’s what is supported in most projects’ documentation. The fact that a lot of the Linux Foundation training uses Docker is another reason I’ve got more experience with it.
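
    The nice part is that podman mirrors most of the docker CLI, so the experience should mostly carry over; the image name below is just an example:

        docker run --rm -it debian:12 bash   # what I do today
        podman run --rm -it debian:12 bash   # near drop-in equivalent
        alias docker=podman                  # how a lot of people switch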


    As far as what you are talking about, people have been trying for years. The Pirate Bay wanted to develop a new method of being entirely decentralized. Odysee is working on something like blockchain and torrents combined that is very interesting. We have I2P and Tor, which have some of the features you mention. I’d love to see it happen, with the big companies not controlling things.

    There is progress though. https://letsencrypt.org/ is non-profit, and there are a variety of open source projects using it to automate TLS certificate signing.
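
    certbot is the client most of those projects wrap; a typical issuance looks roughly like this (the domain is a placeholder):

        # --standalone runs a temporary web server for the ACME challenge
        sudo certbot certonly --standalone -d example.com
        # renewal is normally handled by a timer, but you can test it
        sudo certbot renew --dry-run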

    Check out https://www.sigstore.dev/how-it-works and pay special attention to Fulcio and Rekor. It’s not for web certs, but it’s still a very interesting take on a certificate authority.
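
    To get a feel for it, sigstore’s cosign CLI does keyless signing: Fulcio issues a short-lived certificate tied to your OIDC identity, and Rekor records the signature in a public transparency log. The image and identity below are placeholders, and the flags vary a bit between cosign versions:

        cosign sign ghcr.io/example/app:latest
        cosign verify \
            --certificate-identity you@example.com \
            --certificate-oidc-issuer https://accounts.google.com \
            ghcr.io/example/app:latest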


    There’s no technical reason what you are saying couldn’t work. It just comes down to how you trust it, and if you can’t trust it at all, it doesn’t do much good anyway. That’s the problem to be solved. You could compromise somewhere in the middle, but then you have to work out what is acceptable. I suppose the level of trust could be configurable, with different nodes earning different levels of trust, and you could configure your accepted levels for DNS or CA. It’s an interesting idea.
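
    Purely as a thought experiment, the knobs might look something like this; nothing below exists, and all the names are made up:

        # imaginary per-resolver trust policy for a decentralized DNS/CA:
        # a quorum of agreeing nodes before accepting a DNS answer, and a
        # stricter quorum before trusting a certificate claim
        mkdir -p ~/.config/dtrust
        printf 'dns_min_agreeing_nodes = 3\nca_min_agreeing_nodes = 7\n' \
            > ~/.config/dtrust/policy.conf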



  • h0bbl3s@lemmy.world (OP) to Linux@lemmy.ml · Golang on debian · 4 months ago

    Thanks :) Exactly. I do a lot of development and testing in an Alpine Linux container, simply because it has much newer versions of libraries and musl libc. If I can get it to compile there and on Debian, I’m in good shape as far as compatibility goes. I used to really enjoy Arch and the rolling updates when I was younger, but I’ve gotten to where I don’t want to mess with things constantly changing.

    I use a Python venv for nearly everything I do in Python, and the way Go is set up makes it extremely easy, since it uses a per-user environment anyway.
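
    Both setups are only a couple of commands; the package and tool below are just examples:

        # Python: one venv per project
        python3 -m venv .venv && . .venv/bin/activate
        pip install requests

        # Go: installs land in the per-user tree (~/go/bin by default)
        go install golang.org/x/tools/cmd/goimports@latest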







  • Nice. I might have to clone that setup for fun. What do you use for CI? I’ve got Jenkins running, but I’ve been wanting to play with GitLab CI/CD too.

    I do a lot of my dev work in Docker containers, simply so I’m in a clean environment; it doesn’t hurt for ease of backup either. There’s no particular reason not to use Docker, and I also wanted to keep it kind of brief and simple. The guide I originally read that inspired me had a lot of things that were very outdated, and as I worked through getting it working on Debian 12, I generally stuck with the source providers’ instructions when things weren’t already packaged for dpkg or when alternatives were more complex.
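
    For the clean-environment part, it’s usually just a throwaway container with the working tree mounted in (the image tag is an example):

        # nothing persists after exit except what’s in the mounted directory
        docker run --rm -it -v "$PWD":/src -w /src debian:12 bash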

    I am currently mulling over doing extensions to this guide and adding links at the bottom, or just extending this one a bit. I’m also thinking about writing guides for other stuff too. I’ve been helping people on Discord and IRC a bit recently, and some of what I know might be useful to someone.

    I don’t know everything by any means, far from it, but I’ve been around since my first BeOS and Slackware installs a long time ago, and I’ve picked up a lot. I worked developing and deploying pfSense images for a company years ago and have just had a lot of random experience with Linux and the BSDs over the years.





  • Oh gotcha, it was late when I replied :p. You absolutely get security from a layer of separation by hosting remotely. I monitor my home network and have a similar setup, but I don’t host anything from here, and I never get attacked or probed at all compared to my remote server. Just having those open ports makes you a target; once a few scanners pick up on you hosting content, you will absolutely start getting attacked. Another benefit is you don’t have to have any passwords on your remote host, just an SSH key. They can brute-force all they want; good luck without a zero-day. You also keep your personal IP address out of people’s scope by not hosting from the local network.
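
    Going key-only on the remote host is quick; push a key up first so you don’t lock yourself out (user and host are placeholders):

        ssh-copy-id me@example.com
        # disable password logins entirely
        sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
        sudo systemctl restart sshd   # the unit is called "ssh" on Debian-based systems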

    I used to run much heavier protection on my home network, but after keeping an eye on all the logs and alerts for a while, I realized I was mostly just wasting RAM and storage space. Sane firewall settings are enough for a typical home, and something like crowdsec is probably overkill.

    Now if you are hosting stuff, it’s a different story. I would actually harden my local network MORE than I did the remote one, due to much more of my personal stuff being on my local network. My remote host being compromised would be a mild hassle at most: it does self-backups once a week, and I have my entire site in a private git repo I sync to, so it would take a few minutes to throw up another server. If my home stuff got compromised, a lot more damage could be done.
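
    A setup like that doesn’t have to be fancy; as a made-up sketch, one crontab line covers a weekly tarball (the paths are placeholders):

        # crontab entry: 03:00 every Sunday, tarball the site directory
        0 3 * * 0  tar czf /var/backups/site-weekly.tar.gz /var/www/site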