Favor refactoring over rewriting
Sunday, 7 August, 2022 — software
Patrik The Dev, on Twitter:
(W)hat’s your reasoning for PHP? The industry is generally moving away from it (except huge existing codebases) so I wanted to hear your thoughts
Marco Arment on Twitter:
PHP has ALWAYS lived up to its role in my stack as the fast, conservative, ubiquitous platform that just works and needs no server babysitting.
PHP lets me sleep at night, spend time with my family, or go on vacation without worrying that it’ll fail.
It has never failed me.
If your codebase works for you, there are no benefits to trying to rewrite it in a new language or framework.
Scaling horizontally indefinitely would be less expensive than rewriting.
The web app side of the industry has had, for years, a tendency toward rewriting software once a language or framework becomes less popular. Or, more accurately, once a language or framework becomes less meteorically popular.
A memorable example was teams abandoning Rails applications and reworking them in Node.js. I think a lot of the growth of front-end frameworks in the last 10 years has similarly encouraged chasing popularity instead of identifying and sticking with something that works.
I started my software career in 2000 and over the past 22 years, I’ve been part of a lot of rewrite projects. Only rarely have any of these projects provided a benefit to the team or organization at large. More frequently, the rewrites were an expensive distraction from operating and enhancing currently working (and generally stable and profitable) software.
There are conditions where rewrites make sense. But they're less about rewriting the application and more about refactoring specific parts of the application into something else. Would I rewrite my Rails application in Go because “Go is more performant”? I would not. I might, however, identify some critical piece of expensive processing with a well-defined scope that would benefit from a substantial speed increase, and rewrite that particular piece. More likely, I would first examine my present language and framework for missed opportunities for performance optimization, and then, and only then, would I consider the value of a rewrite.
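As a sketch of that “measure first” step, here's what looking for missed optimization opportunities might look like using Ruby's built-in Benchmark module. The workload here is hypothetical, not from any real application; the point is that an in-language refactor is often the cheap win you'd otherwise buy with a rewrite:

```ruby
require "benchmark"

# Hypothetical hot path: summing values parsed out of a large dataset.
rows = Array.new(200_000) { |i| "item-#{i},#{i % 97}" }

# Naive version: builds a full intermediate array before summing.
naive = -> { rows.map { |r| r.split(",").last.to_i }.sum }

# Same work, refactored in place: no intermediate array, cheaper parsing.
refactored = -> { rows.sum { |r| r[(r.index(",") + 1)..].to_i } }

Benchmark.bm(12) do |bm|
  bm.report("naive:")      { naive.call }
  bm.report("refactored:") { refactored.call }
end
```

Only if measurements like these still miss the target would rewriting that one well-scoped piece, and not the whole application, be on the table.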
Again, rewrites are expensive distractions, and only in vanishingly rare circumstances are they worthwhile. Refactoring and performance work, however, have higher utility, better odds, and accordingly better payouts.
In summation: Favor refactoring within your application’s existing language and framework before you consider rewriting parts of the application in new languages and frameworks. Favor rewriting select parts of the application over rewriting the application wholesale.
Wired: 'Kids Are Back in Classrooms and Laptops Are Still Spying on Them'
Sunday, 7 August, 2022 — links privacy civics
Pia Ceres, reporting for Wired:
Now that the majority of American students are finally going back to school in-person, the surveillance software that proliferated during the pandemic will stay on their school-issued devices, where it will continue to watch them. According to a report published today from the Center for Democracy and Technology, 89 percent of teachers have said that their schools will continue using student-monitoring software, up 5 percentage points from last year. At the same time, the overturning of Roe v. Wade has led to new concerns about digital surveillance in states that have made abortion care illegal. Proposals targeting LGBTQ youth, such as the Texas governor’s calls to investigate the families of kids seeking gender-affirming care, raise additional worries about how data collected through school-issued devices might be weaponized in September.
The CDT report also reveals how monitoring software can shrink the distance between classrooms and carceral systems. Forty-four percent of teachers reported that at least one student at their school has been contacted by law enforcement as a result of behaviors flagged by the monitoring software. And 37 percent of teachers who say their school uses activity monitoring outside of regular hours report that such alerts are directed to “a third party focused on public safety” (e.g., local police department, immigration enforcement). “Schools have institutionalized and routinized law enforcement’s access to students’ information,” says Elizabeth Laird, the director of equity in civic technology at the CDT.
Schools, concerned about keeping students productive and safe from school shootings and other potential harms, have installed highly invasive monitoring software on school-owned devices issued to students; that software makes extraordinary and unproven claims about its efficacy.
I get that screens can have tons of distractions and teachers probably need some assistance in keeping students focused, but all of this just seems over-the-top invasive against student privacy, particularly for students who don’t otherwise have their own devices.
The ease and comfort with which kids can get automatically referred to law enforcement is flat out shitty.
Wear a mask in healthcare settings
Saturday, 6 August, 2022 — healthcare transplant
As mentioned on Twitter, I’m now a kidney transplant recipient. My transplant was in Philadelphia and, a week later, we returned home to Williams Township, PA, about 90 min. away.
Part of my post-transplant follow-up is getting frequent bloodwork to check various values, particularly kidney function and levels of a specific anti-rejection medication. That necessitates going to a local medical facility, having bloodwork drawn and processed, and the results sent down to the parts of my medical team that are in Philadelphia.
I did the first of these remote labwork visits on Friday, and the difference between Philadelphia and the Lehigh Valley is the prevalence of masks in medical settings. At the hospital in Philadelphia: near-universal masking, notionally enforced at hospital entrances. Up here: “Masks recommended, but optional.”
You can probably imagine where this is going.
The next two people in the door to the lab facility within the larger medical complex are both unmasked, one of whom appears to be a medical professional who works within the building. I am now an immunocompromised individual, and I'm left hoping neither of these unmasked folks has Covid.
My request is simple. Even if you’ve given up on masking everywhere else, medical facilities aren’t optional for immunocompromised folks. Wear a mask when you’re at a hospital, a lab, or a doctor’s office. You don’t know who else needs to be there.
Full stack teams vs. full stack developers
Saturday, 6 August, 2022 — software
Allen Holub, on Twitter:
The idea of a “full stack developer” is an insane fantasy, given the complexity of modern software. That’s why we have full-stack teams. Every skill needed to get an idea into our customers hands is represented on the team by at least 2 ppl, & everybody teaches everybody else
This is true-ish. But, it’s far from an absolute, and accordingly, the sentiment is incomplete (Twitter’s design inherently limits nuance.). In my mind, the distinction is in where and how teams invest in complexity and specialization.
What do I mean by that? It breaks down a few different ways:
- Teams with very complex application stacks and deployment mechanisms likely need a lot of internal specialization within the team
- Teams with simpler applications will more likely have “full-stack developers” or, what I think is the more applicable term, generalists
- Teams can outsource a lot of complexity to limit where they need specialized knowledge
- Teams frequently insource a lot of complexity out of aspiration vs. necessity
Here’s where I see some differences between insourcing and outsourcing from a team. I’ll use a Rails application as an example:
- If the Rails application is a monolith, using server-side rendered HTML, deployed to Heroku, I think the team can primarily be composed of Rails generalists with some specialization for CSS and HTML and also for Rails performance optimizations.
- If the app uses other distributed systems within the application or has more complex job processing needs, the team will likely need a specialist in asynchronous job processing, publish/subscribe buses, and job repeatability/idempotency
- Moving towards further complexity, the application is deployed to a cloud provider with Kubernetes, maybe there’s a discrete database vs. using a hosted solution, and the team is now looking at needing specialists conversant in those topics as well
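On the job repeatability/idempotency point in the second case, here's a minimal sketch of the kind of thing that specialist worries about. The job class, key names, and in-memory store are all illustrative assumptions, not any particular library's API; real systems would back the dedupe check with Redis or a database unique index:

```ruby
require "set"

# Hypothetical idempotent job: safe to run twice if a queue redelivers it.
class ChargeCustomerJob
  # Stand-in for a persistent store; a real system would not use process memory.
  PROCESSED = Set.new

  def self.perform(idempotency_key:, amount_cents:)
    # Set#add? returns nil when the key already exists, so the first
    # check-and-record wins and a retry with the same key is a no-op.
    return :already_processed unless PROCESSED.add?(idempotency_key)

    charge(amount_cents)
    :charged
  end

  def self.charge(amount_cents)
    # Real work (a payment API call) would happen here.
  end
end
```

Running `perform` twice with the same idempotency key charges once; the retry is acknowledged without repeating the side effect, which is what makes the job safe under at-least-once delivery.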
Particularly in the last case, an on-team specialist may not necessarily be a person responsible for maintaining a consumed service. My own team closely matches that last bullet point, but no one on the team is managing our Kubernetes or Google Cloud infrastructure. Our company’s systems engineering team is directly responsible for those items, but our team-based specialists are familiar with how to successfully use those services.
In the first case, we’re looking at a pretty good range of potential teams. Everything from hobbyist applications to solo or paired founders on up to small companies or well-focused teams in a larger engineering organization.
Other things a team may insource vs. outsource:
- Continuous integration
- Code quality metrics, linting, and automated feedback
- API documentation and publication
The overwhelming issue I see with Holub’s sentiment is the assumption that applications and therefore teams need that complexity and the specialization that comes with it. I agree that some teams do, but I suspect many teams don’t. Accordingly, those teams would be better off not taking on that complexity absent very compelling technical and business reasons for doing so.
The Web has become an awful place to read
Monday, 27 September, 2021 — web writing
Somewhere in the past 10 years or so, the Web has become terrible for reading and readers. We don’t suffer from lack of writing quality, or quantity. Enough lands in my feed reader, via email newsletter, or on Twitter daily such that I will never want for something new to read. My queue of Instapaper articles or saved browser tabs will tell you this has been the case for some time.
No, I mean, the act of reading itself on the Web, in general terms, has gotten worse over time with outright hostility toward readers. The design of news sites has gotten worse, particularly at regional and local newspapers that aren’t making the sort of nationwide play for subscribers that The New York Times or The Washington Post are.
I mean the slow accumulation of weight that modern sites have taken on. The sliding interruptions. Paragraphs that shift as you’re reading them because the surrounding ad positions have reloaded and the page text has reflowed. Autoplaying video, with sound, that “helpfully” moves itself to a bottom corner of your browser window as you scroll down the page. The newsletter modal after you’ve scrolled to read the second paragraph. It will offer a “Maybe not right now” passive aggressive dismissal.
It’s the alert message that tells you the site would like to send you notifications when they post new content. “Yes” or “Maybe later”. Never, “No, thank you, do not ask me again.” I’ve already breezed past the persistent cookie banner with several questions. I’ll come back to that.
The newsletter thing bugs me because I see it so often on sites. Do I want, right now, to interrupt what I was in the middle of doing to subscribe to a newsletter? Friends, I was in the middle of reading. I have seen it, too, where I go to a site, having clicked on a link from the newsletter, and see the admonishment to subscribe to the newsletter. I think it’s because sites and their advertisers view newsletter subscribers as higher quality, and thus more valuable, than Web traffic. That feels right, but it also seems like punishing a reader instead of rewarding them with fewer obstacles and interruptions for doing exactly what the newsletter asked of them.
What a site could do, should do even, is shift the reader analytics tracking they’re doing for ad placements to smoothing the reader’s path. Skip showing that newsletter reader your subscription call to action.
The cookie banner bugs me severely. Ostensibly, this is a site telling you they’re saving your customizations and want to tailor recommendations to you. I think a lot of that, specifically, is bullshit. Tracking what you read across different properties so an ad provider can show you retargeted ads from site to site? That it will do. It will also help feed Google Analytics, whatever profile Facebook has for you, and whichever other ad tech vendors a site makes use of.
The analytics practice is a lot. Google gets to know tons about you as a reader. Sites get free analytics tools, but, that’s essentially a side-business for Google in exchange for acquiring more information. I think, too, site managers and owners don’t know how to treat or read the analytic data they have because they keep pivoting to video.
The other regrettable practice that comes out of this is click-chasing by lots of “news” sites rewriting stories from somewhere else. This is completely different in spirit from the early 2000s weblog practice of linking and adding some context or explanation for wonderful things. Jason Kottke still does this, and bless him for it. No, this practice leads to the Rolling Stone rewrite of an incomplete local news story, and then everyone riffing off of that. Rolling Stone and other sites do rewrites like this to attract traffic for ad positions.
Meanwhile, browsers like Firefox, Brave, and Safari have taken to reporting on what invasive techniques they’re blocking from sites. By comparison, Google’s Chrome is still advertiser friendly. It certainly helps Google’s business. Chrome, Firefox, and Safari had lots of available ad-blocking plugins. Site ads became more intrusive and annoying. For a time, a lot of sites carried straight-up malware in their ads, because websites outsourced everything about the online advertising side of the business. Malware ads carried readers away from the site they were reading to some tar pit with even worse practices, trying to get victims to install something nefarious or part with banking information.
It got worse. Advertising networks bought the ad blocking browser plugins and turned them into ads. Websites added plugin detectors to block readers from reading if they wouldn’t let the site’s advertising try to grift them.
Another contemptible practice is not even loading the whole article at once. Instead, a few paragraphs, separated by ad positions and then some manner of “Click to read more” button. I haven’t figured out the why here. The expensive page-retrieval and load has already occurred. Maybe it’s a manner of protecting against reader modes or other weird robot mitigation. It seems related to a largely disappeared trend to split web articles across multiple pages.
Now, let’s say a reader makes it well into an article, perhaps to the end. I hope that reader is expecting chumboxes, because that’s what’s there. In the past, a news article would have three or six terrible ads for get rich quick or lose weight fast schemes. Now, those lousy and deceptive visual ads will darn near infinitely scroll on some websites. Famous crypto surgeon knows the 30 second routine to clear out your offline wallet. Do this once everyday, and throw away this vegetable.
What I find frustrating is news sites have made putting any sort of news article in context much harder than it ought to be. Stories typically aren’t updated to point at newer, more complete information. It’s rarely possible to know if a particular article contains the most recent information a news organization has. But, somehow, simultaneously, looking through archives in a systematic fashion or even just browsing to past-days news is nigh impossible.
Substantial responsibility for the reading experience decline I’m describing lies at the feet of the news industry, through a combination of the collapse of advertising revenue and a disinclination to maintain in-house expertise.
In 1999, at the beginning of my brief reporting and editing career, the newspaper I worked at, the Press-Tribune of Roseville, Calif., did not have an internal library. The company that bought it joined it with several other local newspapers in the foothills and suburbs east of Sacramento and cut staff, including the librarian. They sold off the presses, sublet the now-empty half of the offices formerly occupied by the press, and printed the papers from a central facility in Auburn. Our work computers had small hard drives, even for the time. We could only keep two or three editions of the paper’s QuarkXPress files around before our managing editor came around and removed anything but established templates. After the paper went to press, a week or so later, we had no company-maintained record of what we’d printed. This working arrangement was pretty informative for where the rest of the news industry ended up by 2021.
The way this plays out online is that archives start off terrible and get worse over time. Newspapers change online publishing systems, URLs change, and it’s rare for a paper or news site to either redirect old URLs or migrate the articles at the old addresses to the new system. The New York Times does it well, as does The Atlantic. Lots of local news sites, however, do not, and our communities, local and online, are poorer for it. The attention sites pay to their site design has declined, apart from finding more opportunities to interrupt readers.
Readers deserve better.