PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. If you got a brick of text, don’t be alarmed; that’s normal.

No, I’m not interested in voting for your candidate.

  • 0 Posts
  • 61 Comments
Joined 1 year ago
Cake day: July 9th, 2023


  • Also: model trains. I was into model trains for a few years, but I realized that I didn’t really have the life experience to make a fulfilling model trainset. Like, I did the thing: I made a (really childish) layout with some crappy blocks and streets, and I got the trains to move and stuff, but it didn’t…say much? It was “I’m a child and I like trains”, which is great! I probably wouldn’t have become interested in trains at all otherwise!

    But I want more…I always want more. I need to go more hardcore into the few things I can actually tolerate doing…

    And as a child, I saw some really cool trainsets built by adults that told stories, made me laugh, made my parents laugh, made me feel awe at the storytelling and creativity of the craft. Even my cousin, who built a trainset in his basement in his early twenties, had a much more inspired trainset than mine (when I was much younger, like 10 or 12). His trainset was cool. He studied how trains worked, how to make a realistic line with realistic scenery and infrastructure. His trainset reflected who he was, and ultimately foreshadowed what he became. He literally works for a rail company now designing the train tracks.

    So I’m kinda “saving” that hobby for when I’m in my 60s, after I’ve integrated enough life experience (and hopefully some capital) to build a trainset that really reflects the person I ultimately became.

    My trainset is gonna have a sick, functioning roller coaster, some overly complicated automated control circuits, some heavy metal references, some intentionally goofy shit, serious shit, an anarcho-communist bent, a layout that at least is informed by modern infrastructure design, etc., because that’s at least partially the person I will have become.





  • It’s so hard!

    It’s really hard! But it’s really rewarding too. And as a computing/music student [1], you’re in a great major to start!

    First off, if you just want to make your own effects and you’re not really interested in distributing them or making them public, I recommend using JSFX. It’s way easier. You can read through the entire spec in a night. JSFX support is built into REAPER, and apparently YSFX allows you to load JSFX code into other DAWs, although I haven’t tested it. JSFX plugins are compiled on the fly (unlike VST plugins, which are compiled ahead of time and distributed as DLLs), so you just write them up as text files.

    However, JSFX is limited compared to VST, AU, LV2, AAX [2], and other similar plugin formats. Pre-compiled plugins also perform better, which is why commercial plugins are distributed that way.

    So if you plan on writing pre-compiled plugins for public consumption, you’ll need to do some C++ programming.


    IMO the most important thing to learn for plugin design is how to code well, particularly in C++ with Git and JUCE.

    If you learn how to code with good practices, you can compensate for all other deficiencies.


    Between “music”, “engineering”, and “software development”, plugin design feels the most like “software development”.

    99.9% of all plugins are written in C++, and most of those (both proprietary and FOSS) are built with the JUCE library. School taught me the basics of C++, but it doesn’t teach you how to code well. In particular, your DSP code needs to meet a soft real-time constraint, and you have to use multithreading: there’s a thread for the audio signal (which must NEVER get interrupted) and at least one thread for the GUI.

    You also need to figure out which parts of the C++ standard library are real-time safe, and which aren’t. Here’s a good talk on that.
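To make the real-time constraint concrete: the standard pattern is to pre-allocate everything and pass data between the GUI and audio threads through a wait-free single-producer/single-consumer FIFO, so the audio thread never locks or allocates. Here’s a toy Python sketch of the idea (the class and names are my own invention; in a real plugin you’d do this in C++ with std::atomic indices):

```python
class SPSCRingBuffer:
    """Single-producer/single-consumer FIFO: all memory is allocated
    up front, so the reader (the 'audio thread') never allocates,
    locks, or blocks -- it only moves an index."""

    def __init__(self, capacity):
        self._buf = [0.0] * capacity  # preallocated once, never resized
        self._capacity = capacity
        self._write = 0  # only the producer touches this
        self._read = 0   # only the consumer touches this

    def push(self, value):
        """Called by the GUI/message thread."""
        nxt = (self._write + 1) % self._capacity
        if nxt == self._read:
            return False  # full: drop rather than block the reader
        self._buf[self._write] = value
        self._write = nxt
        return True

    def pop(self):
        """Called by the audio thread: O(1), no allocation, no locks."""
        if self._read == self._write:
            return None  # empty: audio thread just carries on
        value = self._buf[self._read]
        self._read = (self._read + 1) % self._capacity
        return value

# GUI thread queues a parameter change; audio thread picks it up.
fifo = SPSCRingBuffer(capacity=8)
fifo.push(0.5)     # e.g. a new gain value from a slider
print(fifo.pop())  # audio thread reads 0.5 without ever blocking
```

The point is that pop() only reads a slot and bumps an index: no allocation, no mutex, nothing that can make the audio callback miss its deadline.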

    If you use JUCE or a similar development library then they have well-tested basic DSP functions, meaning you can get by without doing all the math from scratch.

    Start watching Audio Developer Conference talks like TV as they come out. JUCE has a tutorial, and MatKat released a video tutorial guiding the viewer through coding a simple EQ plugin [3]. JUCE plugins are basically cross platform, and can typically be compiled as VSTs on Windows, AU plugins on Mac, and LV2 plugins on Linux.

    JUCE is a really complicated library even though it vastly simplifies the process (because audio plugin development is inherently hard!). You’re going to have to learn to read a LOT of documentation and code.

    I also recommend learning as much math as you can stomach. Start with linear algebra, calculus, Fourier analysis, circuit theory, and numerical analysis (especially Padé approximants), in that order. Eventually, you’ll want to roll your own math, or at least do something that JUCE doesn’t provide out of the box. Julius O. Smith has some really good free online books on filters, Fourier analysis, and DSP with a music focus.
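As a taste of why Padé approximants show up in audio: a rational function can track a saturating curve like tanh far better than a same-order Taylor polynomial, and it’s cheap to evaluate per-sample. A sketch (this is the [3/2] Padé approximant of tanh, derived from the standard series; double-check the algebra before shipping anything):

```python
import math

def tanh_taylor(x):
    # 5th-order Taylor series of tanh about 0
    return x - x**3 / 3 + 2 * x**5 / 15

def tanh_pade(x):
    # [3/2] Pade approximant of tanh: x(15 + x^2) / (15 + 6x^2).
    # Matches the Taylor series through x^5, but being rational it
    # saturates gracefully instead of blowing up as |x| grows.
    return x * (15 + x * x) / (15 + 6 * x * x)

for x in (0.5, 1.0, 2.0):
    print(f"x={x}: tanh={math.tanh(x):.4f}  "
          f"pade={tanh_pade(x):.4f}  taylor={tanh_taylor(x):.4f}")
```

At x = 2 the Taylor polynomial has already shot past 3 while the Padé version stays near 1, which is exactly the behavior you want from a soft clipper.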

    If you’re willing to sail the high seas to LibGen or buy a book, I recommend Digital Audio Signal Processing by Udo Zölzer for “generic” audio signal processing, and DAFX: Digital Audio Effects by Zölzer for coverage of nonlinear effects, which are typically absent from DSP engineering books. I also recommend keeping a copy of Digital Signal Processing by Proakis and Manolakis on hand because of its detailed coverage of DSP fundamentals, particularly filter structures, numerical errors, multirate signal processing, and the Z transform.

    A little bit of knowledge about machine learning and optimization is good too, because sometimes you need to solve an optimization problem to synthesize a filter, or even solve one in real time as part of the effect itself (example: pitch shifting). Deep learning is yielding some seriously magical effects, so I do recommend you learn it at your own pace.

    DSP basically requires all the math ever, especially the kind of DSP that we want to do as musicians, so the more you have the better you’ll be.

    [1] IMO that would have been the perfect major for me, that or acoustical engineering, if anything like that existed in my area when I went to recording school 10 years ago. While my recording degree taught me some really valuable stuff, I kinda wish that they pushed us harder into programming, computing, and electronics.

    [2] AAX requires you to pay Avid to develop. So I never use AAX plugins, and I have no intention of supporting the format once I start releasing plugins for public consumption, despite its other technical merits.

    [3] Over half of MatKat’s tutorial is dedicated to GUI design, i.e. the audio part is basically done but the interface looks boring and default. GUI design and how your GUI (editor component) interacts with the audio processor component are extremely important and time-consuming parts of plugin design. Frankly, GUI design has been by far the most complicated thing to “pick up”, and it’s why I haven’t released anything yet.


  • So I don’t value high-fidelity video, because I don’t see very well even with glasses and it wouldn’t make a difference for me.


    I do value high fidelity audio because:

    • I am a musician and producer, although not as much as I used to
    • I have ear training
    • I went to recording school
    • I am autistic with sensitive hearing
    • I have audio and acoustical engineering as special interests
    • I’m doing a master’s degree in electrical engineering where I’ve already designed audio gear for my projects
    • I am teaching myself audio plugin design for fun

    But I simply can’t afford high-fidelity gear for everyday listening. For my studio monitors, I spent as much as I could to get the best speakers I could afford, so that I can be certain that what I’m hearing is an accurate representation of what I “commit to tape”. However, for walking to class or going to the market, I’m not gonna pay for expensive headphones that could get stolen, broken, or lost. It’s impractical.

    My $20 Bluetooth headphones [1] are sufficient for everyday carry. They sound “95% of the way there”, they don’t get in the way when I’m walking, and if I lose them, I can have an identical pair delivered to my door within a couple of days. 95% is good enough for me. Actually, I could probably settle for less.

    And then there’s storage. My library is already > 110GB in MP3 format, so storing it all in uncompressed formats would be unwieldy.

    So in the rare cases that my listening hardware is insufficient, I’ll usually consult a software equalizer. For example, on Linux, Easy Effects allows me to apply equalizers, dynamic compression, and a bunch of other plugins in LV2 format to the PipeWire output (and input). It’s super convenient for watching YouTube college lectures with questionable microphone quality on my shitty TV speakers. Other than dynamic compression for leveling and an equalizer for frequency effects, I am typically not interested in doing anything else for intelligibility. Said differently, I am not interested in exploiting the nonlinearities in real speaker systems (other than possibly dynamic compression), so I should be able to fix any linear defects (bad frequency response) with a digital equalizer. The nonlinearities in real speaker systems are, for HiFi listening purposes [2], defects.
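For a sense of what those EQ plugins are doing internally: each band is usually a biquad filter, and the standard coefficient recipes come from Robert Bristow-Johnson’s Audio EQ Cookbook. A sketch of one peaking band (variable names are mine; treat it as illustration, not production code):

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """Coefficients for one peaking-EQ band (RBJ Audio EQ Cookbook)."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * math.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * math.cos(w0), 1 - alpha / amp]
    # normalize so a[0] == 1
    return [x / a[0] for x in b], [x / a[0] for x in a]

def filter_signal(b, a, x):
    """Direct Form I difference equation, one sample at a time."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

# +6 dB boost at 1 kHz, Q = 1, 48 kHz sample rate
b, a = peaking_biquad(fs=48000, f0=1000, gain_db=6.0, q=1.0)
impulse_response = filter_signal(b, a, [1.0] + [0.0] * 7)
```

Two nice sanity checks fall out of the math: the gain at f0 works out to exactly 10^(gain_db/20), and the DC gain is exactly 1, which is what “fix a narrow frequency defect without touching the rest” means.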

    Also, I’m extremely skeptical of products marketed towards “audiophiles” because there’s so much marketing bullshit pseudoscience surrounding the field that all the textbooks that cover loudspeaker design and HiFi audio electronics have paragraphs warning about it as the first thing.

    Like I experience the difference between different pairs of binoculars and speakers dramatically, and graphical analysis backs up the differences, so how could they sound/look negligibly different to others?

    Next time you do a graphical analysis, check out the magnitudes of the differences in your graphs versus the magnitude of the Just Noticeable Difference in amplitude or frequency. We probably do experience the differences between speakers differently than others. We’re outliers.
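To put numbers on that: the usually quoted just-noticeable difference for level is around 1 dB (trained listeners can do somewhat better under lab conditions). Converting an amplitude difference to dB makes the comparison mechanical (the 1 dB figure is the rough textbook value, not a hard law):

```python
import math

def level_difference_db(a1, a2):
    """Level difference between two amplitudes, in dB."""
    return 20 * math.log10(a2 / a1)

JND_DB = 1.0  # rough textbook value for the amplitude JND

# Two speakers whose responses differ by 5% in amplitude at some frequency:
diff = level_difference_db(1.00, 1.05)
print(f"{diff:.2f} dB -> "
      f"{'audible' if abs(diff) > JND_DB else 'probably inaudible'}")
```

A 5% amplitude difference is under half a decibel, so a graph that looks dramatic on a zoomed-in axis can still be below what an average listener reliably hears.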

    What’s your take on both major and, at the high end, diminishing returns on higher quality sensory experiences?

    For personal listening, the point of diminishing returns is basically $20, because I can’t afford shit. For listening to something I plan on sharing with others, I’d be willing to put in whatever I can afford. But frankly, I’d be just as likely to straight-up do the math and design my systems myself, because I 100% don’t trust any “high fidelity” system that doesn’t come with a datasheet and frequency response.


    Lastly, I do wear glasses. I typically get my glasses online because, once you have the prescription and your facial measurements, it is the same quality as the stuff you get at the big-box stores.

    [1] I acknowledge that Bluetooth sucks, particularly for audio.

    [2] As a metal guitarist, I’m not against speaker nonlinearity for guitar speakers, but then again, guitar speakers are really convincingly simulated by impulse responses, which are a core linear systems concept, implying that they are nearly linear devices even at the volumes they are typically played at.
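The impulse-response point is worth spelling out: if a speaker is (near) linear and time-invariant, then convolving any input with its measured impulse response reproduces what the speaker would do to that input. A bare-bones sketch (real cab sims use FFT-based fast convolution; this direct form is just to show the idea):

```python
def convolve(signal, ir):
    """Direct convolution: each output sample is the input history
    weighted by the impulse response (LTI system behavior)."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

# A toy 3-tap "cabinet" IR applied to a short DI track:
ir = [1.0, 0.5, 0.25]           # hypothetical measured response
di_track = [1.0, 0.0, 0.0, 0.0] # a unit impulse
print(convolve(di_track, ir))   # an impulse in -> the IR itself comes out
```

That last line is the whole trick behind capturing an IR: hit the system with (something equivalent to) an impulse, record what comes back, and you’ve characterized it completely, to the extent it really is linear.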






  • but channels i watch dont upload to this right?

    I mean if you watch The Linux Experiment then yeah, and only because he chooses to upload to both YouTube and PeerTube (different instance). Generally speaking though, most of my favorite creators don’t upload there. Yet…

    isn’t this a whole different site?

    Correct, and that’s why it’s so important to get creators to put their stuff on PeerTube!

    IMO, PeerTube is the long-term replacement for YouTube. Eventually, Google will block Invidious, break FreeTube and NewPipe, and generally make YouTube unusable for people who cannot afford to pay and/or give up all privacy to Google. Or, knowing Google, they might just kill YouTube for funzies.

    We need to learn how to use PeerTube, and we should convince creators to upload to PeerTube in addition to YouTube.







  • I gotta warn you, as an autistic person who graduated last year with an engineering degree…shit sucks. Half the applications are fake, half the interviews are fake just to scare the overworked employees. The hiring managers are perfectly willing to waste your fucking time justifying the existence of their jobs. I’ve applied for over 350 jobs and internships and gotten zero offers. Same with my classmates. Expect multiple rounds (3-6, maybe more) before getting an offer.

    And engineering was supposed to be a “safe” degree. I can’t imagine how much harder it is for humanities.

    It’s honestly about who you know, then how wealthy and privileged you already are, if you currently have a job, then how personable you are, then a whole bunch of factors I haven’t been able to identify, and then at the very end, how competent you’ll be in the role.

    Make sure to go to your school’s career fair. Dress up as much as possible and bring ~69 copies of your resumé (yes, around seventy, but I’m a manchild so I actually printed exactly 69 last career fair) and hand them all out to employers you can tolerate working for. Typically, you will be expected to know about their company’s work and what positions they have available. I noticed that a lot of companies are there just for brand recognition, i.e. they’re wasting your time. If they’re not wasting your time, there’s a good chance that the person standing there is either a braindead hiring manager or your direct supervisor, or anything in between. At my college, the companies actually list the positions they’re hiring for. If there are none, I don’t go to that company, because they’re wasting my time or aren’t serious enough to fill out the paperwork.

    If your school publishes the employers who will be at the fair, make sure to scan through the list and target employers you want to talk to. Many employers have long lines, so plan accordingly. As an anarchist, I also do a bit of research on each company to make sure they’re not defense contractors, police collaborators, prison contractors, etc. This eliminates a third to a half of the possible employers at my school.

    Career fairs are, from my experience, emotionally and physically draining events that need several days of preparation to get any benefit, and several days of recovery. They are surprisingly loud (bring inconspicuous headphones or earplugs).

    Make sure you have experience in the field you’re applying to work in, even if (especially if) a job posting says you don’t need it. They’re lying. They’re always lying. They basically don’t want to train you at all. Experience in my field is internships and other free work, or a previous job. Research does not seem to count as experience. I hope your field is different.

    Don’t give out your personal info over email to a job posting. Don’t do email interviews; make sure you see an actual moving human, be it over a video call or in person. Got my identity stolen that way. And don’t work for a company that will make you cash a big check (about $5000, right up to the deposit limit for online banking) for “office supplies”. It’s a scam. However, legitimate companies will also ask you for basically the same information and store it in an equally insecure plain-text database, and you’re expected to provide it.

    For DEI stuff, you can fill it out, or not fill it out, or whatever floats your boat. For example, I fill out that I am Hispanic, but not that I’m autistic. I dunno; I just don’t trust engineers to be cool with an openly autistic person based on literally every engineer or engineering-adjacent person I’ve ever met in person ever.

    Besides letters of recommendation, make sure you have people you can use as references who are actually willing to be contacted by phone.

    Technically, you should tweak your resumé for every position. However, because I’m so done with this shit forever, I basically keep a few classes of resumé for different job types. For example, I have a “generic” electrical engineering template, a “control systems” template, and a “data science/software” template. If there’s an opportunity I really want, only then will I tweak it by mirroring the content of the job post. It’s super important for your resumé to be searchable, because the employer is probably going to just do a Ctrl+F to find relevant terms.

    Make sure to also have a plain-text version of your resumé lying around. A common pattern is for the employer to have you upload a copy of your resumé and not even fucking attempt to parse it, meaning that you have to re-enter all its information by hand into their shitty form. Generally speaking, you should be expecting to spend about 15 minutes per application.

    Don’t put absolutely everything on the resumé. You need to leave some stories for interviews.

    Do your phone and Zoom interviews in front of a computer with a text editor open. I actually take notes during and after the interview, and then commit it to a remote repo so I can pull it onto any computer and get all my notes from all phone calls. You should also have a copy of the resumé you actually submitted to the company on hand.

    Technically you should also write cover letters for every position, but again, because I’m so fucking done with this bullshit, I rarely do. If I’m feeling like doing a half-measure, this is actually an excellent opportunity to use ChatGPT or an open-source LLM to write for you (with proofreading, of course), because this is one application where a bullshit machine is fucking deserved and actually works, since they socially expect bullshit. Not like they’re reading it anyways.

    I’m “pro-work,” if anything. I want a career.

    Can I be honest? I desperately want to work too, but I’m slowly coming to the conclusion that it’s literally easier not to fucking bother and just live off the government, parents, rich friends, and/or stealing. I’m actually a lot worse off than I used to be before studying engineering. I’m overqualified for my old job, but underqualified for engineering and tech work, and all at the price of thousands of fucking dollars of debt. Turns out capitalist “efficiency” is making it harder for us to be put to work.

    Looking for work is a job in and of itself, except you don’t get paid.


  • I love how into this stuff you are.

    Thanks, I wish people around me felt the same way 😂.

    T O A N W O O D Z

    So I actually found an Acoustical Society of America article on wood species for acoustic guitar by a luthier. My favorite quote was:

    Provided the wood does not respond like the proverbial “piece of wet cardboard”, most luthiers can create a respectable instrument from available timber.

    And tbh with enough EQ and compression before the amp I probably can get metal out of a piece of wet cardboard.

    From the conclusion of the paper:

    Specific woods types have specific attributes that make them best suited for making particular guitar components.

    However, the street lore attributing specific types of sound to specific species of a genus is seldom justified.

    Guitars designed to acoustical criteria (rather than dimensional criteria) where the effects of different stiffnesses and densities of species are minimised, sound very similar.

    The residual differences that can be heard may be attributable to the sound spectral absorption and radiation of the particular piece of wood used, a property that is not easily measured and is poorly substituted by the occasional measurement of the damping characteristics of the wood. Once the density and Young’s modulus of particular species is accounted for by careful acoustical design the residual differences are very subtle, yet can be important enough to ensure that some luthiers continue the romantic search for that “holy grail” of woods.

    I believe that some of this discussion should apply to electric guitar. Unless you are playing basically perfectly clean electric guitar, though, the wood your guitar is made of is a lot less important than… everything else in the signal chain. Since wood does affect the guitar’s sensitivity, I could see it affecting how the guitar responds to classic amps with low (relative to modern amps) distortion generated by few gain stages and less filtering, i.e. the playstyle employed by those guitar forum people. But a much larger factor in your guitar’s sound is… big surprise… all the other choices the luthier made when designing and fabricating your guitar, as well as your pickups and the signal chain you use after the signal leaves the guitar.

    Also since we’re metal players and we’re absolutely destroying the original signal, the type of wood only makes a difference for structural reasons (i.e., not going out of tune, exploding under the pressure, etc.), which can similarly be accounted for by a competent luthier. For example, all of my guitars are uber-cheap, and their necks can be very easily pulled out of tune, because they were not built by competent luthiers. Consequently, the few times I did play live shows, I had to be very careful on stage to not “do stuff my guitar doesn’t like” so it didn’t go out of tune by the end of the song. Good times…

    Creambacks

    So I found a video where Creambacks get compared to a V30. IMO based on that video and forum posts, I would consider a Creamback H-75 over the H-65 or the Neo. H-65 sounded too dark to stand out in a mix, and the Neo sounded like bees and basically nothing like the other two. (If my guitar sounds like bees, I want it to be an effect I can turn off.) However, take it with a grain of salt since mic positions were not the same for each speaker. But also, it depends on your primary use case (recording, bedroom play, playing shows).

    Although honestly, I think 99% of guitar players would get a lot farther investing in a PC with a decent CPU + a decent USB audio interface than buying actual physical amplifiers unless they need to amplify an actual venue [1]. You’d get better sound, more controllable sounds [2], easier recording, and more possibilities by going digital. Also, if you can send guitar into your computer (or run the Effects Send to your interface to test it with your real amp), it would be cheaper to pick up an impulse response of the speaker before committing to buying one. (An impulse response captures the “character” of a speaker + cabinet + power amp assuming it is a linear system. It is a very good approximation, nearly indistinguishable from the real thing. For example, I recorded several IRs of my Vintage 30 and a couple other speakers in my cabinet.)

    [1] Technically you need plugins and DAW software too, but you can 100% use a combination of stock plugins and freeware and get excellent results with practice. The Ardour DAW is free and open-source (they charge for pre-compiled binaries, but Linux package managers typically have a version ready to go for free), although REAPER is better IMO (not simple, but extremely customizable and stable) and has an unlimited free trial (and runs on Linux).

    [2] For example, the “clean” channel on the 6505 absolutely sucks, except (ironically) as a rhythm metal channel. If I needed to use both clean and distorted sounds, I would have to use a second amp and an A-B switch. In software, it is absolutely trivial to automate the switch between two (or more) amps (or effects, or whole signal chains). ReaGate, a freeware noise gate plugin that comes with REAPER but anyone can get, includes an adjustable pre-filter so that it only responds to the frequency ranges you expect your guitar to “live in”. It also has a side chain input, meaning you can gate the output signal based on the signal that goes in before the amplifier, like the “four-wire” noise gate setup in an amplifier’s FX loop. This setup means that the amplifiers won’t distort the signal as the gate transitions from on to off, and it also can take care of noise due solely to the distortion stages.
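The side-chain gating in [2] is easy to sketch: the gate’s open/close decision is driven by the clean pre-amp signal’s envelope, but the gain is applied to the post-amp signal, so the amp’s noise floor can never reopen the gate. A toy version (envelope follower plus a hard open/close; ReaGate itself adds attack/release smoothing and the pre-filter):

```python
def envelope(x, coeff=0.5):
    """One-pole peak envelope follower. coeff sets the decay speed;
    0.5 is unrealistically fast, just so this toy example closes quickly."""
    env, out = 0.0, []
    for s in x:
        env = max(abs(s), env * coeff)
        out.append(env)
    return out

def sidechain_gate(main, sidechain, threshold):
    """Gate 'main' (the post-amp signal) using the envelope of
    'sidechain' (the clean guitar before the amp), like the
    four-wire gate setup in an FX loop."""
    env = envelope(sidechain)
    return [s if e > threshold else 0.0 for s, e in zip(main, env)]

clean = [0.0, 0.0, 0.8, 0.9, 0.0, 0.0]  # guitar: silence, note, silence
amped = [0.1, 0.1, 1.0, 1.0, 0.1, 0.1]  # amp output incl. noise floor
print(sidechain_gate(amped, clean, threshold=0.5))  # noise outside the note is muted
```

Because the decision signal never passes through the distortion stages, hiss that exists only after the amp can’t hold the gate open.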


  • That was more signal chain theory than I’ve ever read in one sitting.

    Sorry 😂. Digital signal processing is one of my special interests so I typically go overboard with it.

    • Source of distortion doesn’t really matter, it’s filters
    • TS808 cleans shit up, no tubes. Tubes not necessary and probably dont do much in a pedal anyway

    Yep that’s it.

    • What are you running now for amps and such to get away from the 5150/TS808 combo?

    In the “before times”, I used a TBX150 solid-state amp alone, and a Peavey 6505, mostly for recording. The TBX150 is a great amp for modern death metal, but it has a parametric EQ. For me, that’s great, but a lot of guitarists don’t like parametric EQs (it’s part of why they hate the Metal Zone, which also has one). I plugged both into a cheap birch Seismic cabinet with a Vintage 30 speaker harvested from a Recto cab. Honestly, the biggest factor in the quality of my guitar recordings was switching to that speaker.

    At this very moment, since my grandmother moved in, I’ve had to forgo amps altogether for simulators. I actually use either a 6505 simulator (Nick Crowe 8505) or a Fender Frontman (yes, that amp, specifically the AXP Softamp plugin) with the mids cranked up and the cabinet impulse thrown out and replaced with a set of impulses I recorded myself from the previously mentioned cabinet.

    The best results I’ve gotten have been with an EQ before a Boss HM-2 (Buzz Helvetes): the EQ set to however much “HM-2-ness” you want depending on what you’re playing, and the pedal set to the smallest possible offset from zero distortion. The pre-EQ is typically a bandpass so I can get more “grinding”, and as a cheat for not changing my strings. But it doesn’t really change the “overall” frequency response of the output of the HM-2, just how the HM-2 “sees” your guitar, so you still get its nastiness. Then the majority of the gain comes from the amp.

    If an HM-2 or Metal Zone is too much, I’ve gotten really “smooth” results with using the ProCo Rat as an overdrive. Note that on a ProCo Rat, the Filter (tone) knob “is backwards”; all the way to the left = minimal filtering.

    My inspiration for this is really the fact that old school death metal was recorded on shitty gear compared to what is available today, and that some of the magic lies in the fact that it sucks in just the right way. Besides At the Gates, who used two shitty pedals, Chuck Schuldiner from Death used a shitty Valvestate and got great results. Most of the old school death metal bands were using Valvestates.

    In the past few months, I’ve been experimenting with using the RS-MET tool chain plugin to generate nasty-sounding distortion with odd-order Chebyshev polynomials. It initially sounds like a more unhinged Boss HM-2 with no pre- or post-EQ, but since the plugin lets you input the math you want to do, it’s much more controllable. If you use this plugin, you gotta make sure to set the built-in filters to cut off high frequencies that would be aliased, or turn on oversampling, or both. This is included within the plugin, but you have to actually set it. Otherwise, everything just sounds like aliasing, although that’s pretty gnarly too.
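Chebyshev waveshaping works because of the identity T_n(cos θ) = cos(nθ): feed a full-scale cosine into the nth Chebyshev polynomial and you get exactly the nth harmonic out, so a weighted sum of odd polynomials lets you dial in an odd-harmonic recipe directly. A sketch (RS-MET’s plugin adds the anti-aliasing I mentioned; this toy version has none, so it will alias at high frequencies):

```python
import math

def cheby(n, x):
    """Chebyshev polynomial T_n(x) via the recurrence
    T_0 = 1, T_1 = x, T_n = 2x*T_{n-1} - T_{n-2}."""
    t0, t1 = 1.0, x
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0 if n == 0 else t1

def odd_harmonic_shaper(x, weights):
    """Weighted sum of the odd Chebyshev polynomials T1, T3, T5, ...
    On a full-scale cosine, weight k becomes the (2k+1)th harmonic."""
    return sum(w * cheby(2 * k + 1, x) for k, w in enumerate(weights))

# Full-scale cosine in -> T3 spits out exactly the 3rd harmonic:
theta = 0.7
assert math.isclose(cheby(3, math.cos(theta)), math.cos(3 * theta))

# A nasty odd-harmonic blend: fundamental + strong 3rd + some 5th
shaped = odd_harmonic_shaper(math.cos(theta), [1.0, 0.6, 0.3])
```

The catch, as noted above, is that those extra harmonics can land above Nyquist, which is why the pre-filtering or oversampling step is not optional if you don’t want everything to sound like aliasing.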

    So the short answer is: switch out a tube screamer for some garbage piece of gear, preferably something with a frequency response (loosely “tone”) you like and a “bold” distortion. Then, set the pedal so it is giving the least amount of gain while still exhibiting its nonlinearity (minimum possible distortion), then set the amplifier to give you the rest of your gain and cut through the mix. I cannot stress enough that for metal guitars, particularly recording guitars, you gotta set your knobs so that it sounds good in the mix. If it sounds perfect in the room without the rest of the band, I guarantee you it will sound muddy in the mix.