walls.corpus

By Nathan L. Walls

Wear a mask in healthcare settings

As mentioned on Twitter, I’m now a kidney transplant recipient. My transplant was in Philadelphia and, a week later, we returned home to Williams Township, PA, about 90 min. away.

Part of my post-transplant follow-up is getting frequent bloodwork to check various values, particularly kidney function and levels of a specific anti-rejection medication. That necessitates going to a local medical facility, having bloodwork drawn and processed, and the results sent down to the parts of my medical team that are in Philadelphia.

I did the first of these remote labwork visits on Friday, and the difference between Philadelphia and the Lehigh Valley is the prevalence of masks in medical settings. At the hospital in Philadelphia, near-universal masking, notionally enforced at hospital entrances. Up here, “Masks recommended, but optional.”

You can probably imagine where this is going.

The next two people in the door to the lab facility within the larger medical complex are both unmasked, one of whom appears to be a medical professional who works within the building. I am now an immunocompromised individual, and I’m left hoping neither of these unmasked folks has Covid.

My request is simple. Even if you’ve given up on masking anywhere else, medical facilities aren’t optional for immunocompromised folks. Wear a mask when you’re at a hospital, a lab, or a doctor’s office. You don’t know who else needs to be there.

Full stack teams vs. full stack developers

Allen Holub:

The idea of a “full stack developer” is an insane fantasy, given the complexity of modern software. That’s why we have full-stack teams. Every skill needed to get an idea into our customers hands is represented on the team by at least 2 ppl, & everybody teaches everybody else

This is true-ish. But it’s far from an absolute, and accordingly, the sentiment is incomplete (Twitter’s design inherently limits nuance). In my mind, the distinction is in where and how teams invest in complexity and specialization.

What do I mean by that? It breaks down a few different ways:

  • Teams with very complex application stacks and deployment mechanisms likely need a lot of internal specialization within the team
  • Teams with simpler applications will more likely have “full-stack developers” or, what I think is the more applicable term, generalists
  • Teams can outsource a lot of complexity to limit where they need specialized knowledge
  • Teams frequently insource a lot of complexity out of aspiration vs. necessity

Here’s where I see some differences between insourcing and outsourcing from a team. I’ll use a Rails application as an example:

  • If the Rails application is a monolith, using server-side rendered HTML, deployed to Heroku, I think the team can primarily be composed of Rails generalists with some specialization for CSS and HTML and also for Rails performance optimizations.
  • If the app instead uses a heavy front-end JavaScript framework like React, the team will likely need a specialist in JavaScript and React
  • If the app uses other distributed systems within the application or has more complex job processing needs, the team will likely need a specialist in asynchronous job processing, publish/subscribe buses, and job repeatability/idempotency (see the sketch after this list)
  • Moving towards further complexity, the application is deployed to a cloud provider with Kubernetes, maybe there’s a discrete database rather than a hosted solution, and the team is now looking at needing specialists conversant in those topics as well

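To make the idempotency point in the third bullet concrete, here is a minimal TypeScript sketch of a job handler that refuses to re-run a side effect when the same job is delivered twice. The names (Job, chargeCustomer) and the in-memory Set standing in for a durable dedupe store are hypothetical illustrations, not any particular system.

    // Minimal idempotent job handler sketch (TypeScript).
    // The Set stands in for a durable store (a database table, Redis set, etc.);
    // an in-memory Set only protects a single process until it restarts.

    interface Job {
      id: string;           // unique per logical job, reused if the job is redelivered
      customerId: string;
      amountCents: number;
    }

    const processedJobIds = new Set<string>();

    // Hypothetical side effect that must not run twice for the same job.
    async function chargeCustomer(customerId: string, amountCents: number): Promise<void> {
      console.log(`charging ${customerId} ${amountCents} cents`);
    }

    async function handleJob(job: Job): Promise<void> {
      if (processedJobIds.has(job.id)) {
        return; // already handled: acknowledge without repeating the side effect
      }
      await chargeCustomer(job.customerId, job.amountCents);
      processedJobIds.add(job.id);
    }

    // Delivering the same job twice charges the customer once.
    const job: Job = { id: "job-123", customerId: "cust-9", amountCents: 1999 };
    handleJob(job).then(() => handleJob(job));

The interesting production questions (where the dedupe record lives, how long it is kept, what happens if the process dies between the side effect and recording it) are exactly the details a team’s job-processing specialist ends up owning.
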
Particularly in the last case, an on-team specialist may not necessarily be a person responsible for maintaining a consumed service. My own team closely matches that last bullet point, but no one on the team is managing our Kubernetes or Google Cloud infrastructure. Our company’s systems engineering team is directly responsible for those items, but our team-based specialists are familiar with how to successfully use those services.

In the first case, we’re looking at a pretty good range of potential teams. Everything from hobbyist applications to solo or paired founders on up to small companies or well-focused teams in a larger engineering organization.

Other things a team may insource vs. outsource:

  • Continuous integration
  • Code quality metrics, linting, and automated feedback
  • API documentation and publication

The overwhelming issue I see with Holub’s sentiment is the assumption that applications and therefore teams need that complexity and the specialization that comes with it. I agree that some teams do, but I suspect many teams don’t. Accordingly, those teams would be better off not taking on that complexity absent very compelling technical and business reasons for doing so.

The Web has become an awful place to read

Somewhere in the past 10 years or so, the Web has become terrible for reading and readers. We don’t suffer from lack of writing quality, or quantity. Enough lands in my feed reader, via email newsletter, or on Twitter daily such that I will never want for something new to read. My queue of Instapaper articles or saved browser tabs will tell you this has been the case for some time.

No, I mean the act of reading itself on the Web, in general terms, has gotten worse over time, with outright hostility toward readers. The design of news sites, particularly regional and local newspapers that aren’t making the sort of nationwide play for subscribers that The New York Times or The Washington Post are, has gotten worse.

I mean the slow accumulation of weight that modern sites have taken on. The sliding interruptions. Paragraphs that shift as you’re reading them because the surrounding ad positions have reloaded and the page text has reflowed. Autoplaying video, with sound, that “helpfully” moves itself to a bottom corner of your browser window as you scroll down the page. The newsletter modal after you’ve scrolled to read the second paragraph. It will offer a passive-aggressive “Maybe not right now” dismissal.

It’s the alert message that tells you the site would like to send you notifications when they post new content. “Yes” or “Maybe later”. Never, “No, thank you, do not ask me again.” I’ve already breezed past the persistent cookie banner with several questions. I’ll come back to that.
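
For context, the browser itself does expose a real, durable “no” here; it is the site-level overlay that never offers it. A small TypeScript sketch of the standard web Notification permission flow, which is presumably what sits behind these prompts:

    // Sketch of the standard web Notification permission flow (browser TypeScript).
    // Sites usually show their own "Yes / Maybe later" overlay first and only call
    // requestPermission() after a "Yes", because a browser-level denial is remembered.

    async function askForNotifications(): Promise<void> {
      if (!("Notification" in window)) {
        return; // notifications unsupported in this browser
      }
      if (Notification.permission === "denied") {
        return; // the reader already said no at the browser level; asking again does nothing
      }
      if (Notification.permission === "granted") {
        return; // already allowed
      }
      // "default": the reader has never answered the browser's own prompt.
      const result = await Notification.requestPermission(); // "granted" | "denied" | "default"
      console.log(`Reader chose: ${result}`);
    }

    askForNotifications();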

The newsletter thing bugs me because I see it so often on sites. Do I want, right now, to interrupt what I was in the middle of doing, to subscribe to a newsletter? Friends, I was in the middle of reading. I have seen it, too, where I go to a site, having clicked on a link from the newsletter, and see the admonishment to subscribe to the newsletter. I think this happens because sites and their advertisers view newsletters as higher quality and thus more valuable than Web traffic. That feels right, but it also seems like punishing a reader instead of rewarding them with fewer obstacles and interruptions for doing exactly what the newsletter asked of them.

What a site could do, should do even, is shift the reader analytics tracking they’re doing for ad placements to smoothing the reader’s path. Skip showing that newsletter reader your subscription call to action.
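
As a sketch of that kind of smoothing, in TypeScript: if a visit arrives tagged as newsletter traffic (the utm_source=newsletter query parameter is a common convention, not a universal one), remember it and skip the subscription modal. The showNewsletterModal function is a hypothetical stand-in for whatever prompt a site already runs.

    // Hypothetical stand-in for whatever subscription prompt a site already runs.
    function showNewsletterModal(): void {
      console.log("Subscribe to our newsletter!");
    }

    // Skip the newsletter pitch for readers who demonstrably came from the newsletter.
    function maybeShowNewsletterModal(): void {
      const params = new URLSearchParams(window.location.search);
      const cameFromNewsletter = params.get("utm_source") === "newsletter";

      if (cameFromNewsletter) {
        // Remember it so later visits in the same browser are also left alone.
        localStorage.setItem("newsletterSubscriber", "true");
      }
      if (cameFromNewsletter || localStorage.getItem("newsletterSubscriber") === "true") {
        return; // this reader is already subscribed; let them read
      }
      showNewsletterModal();
    }

    maybeShowNewsletterModal();

The same signal could come from a first-party cookie set at subscription time; the point is that the data needed to skip the prompt usually already exists.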

The cookie banner bugs me severely. Ostensibly, this is a site telling you they’re saving your customizations and want to tailor recommendations to you. I think a lot of that, specifically, is bullshit. What it will do is let an ad provider track what you read across different properties and show you retargeted ads from site to site. It will also help feed Google Analytics and whatever profile Facebook has for you and whichever other ad tech vendors a site makes use of.

The analytics practice is a lot. Google gets to know tons about you as a reader. Sites get free analytics tools, but that’s essentially a side business for Google in exchange for acquiring more information. I think, too, that site managers and owners don’t know how to treat or read the analytics data they have, because they keep pivoting to video.

The other regrettable practice that comes out of this is click-chasing by lots of “news” sites rewriting stories from somewhere else. This is completely different in spirit from the early 2000s weblog practice of linking and adding some context or explanation for wonderful things. Jason Kottke still does this, and bless him for it. No, this practice leads to the Rolling Stone rewrite of an incomplete local news story, and then everyone riffing off of that. Rolling Stone and other sites do rewrites like this to attract traffic for ad positions.

Meanwhile, browsers like Firefox, Brave, and Safari have taken to reporting on what invasive techniques they’re blocking from sites. By comparison, Google’s Chrome is still advertiser friendly. It certainly helps Google’s business. Chrome, Firefox, and Safari had lots of available ad-blocking plugins. Site ads became more intrusive and annoying. For a time, a lot of sites carried straight-up malware in their ads, because websites outsourced everything about the online advertising side of their business. Malware ads carried readers away from what they were reading to some tar pit with even worse practices, trying to get victims to install something nefarious or part with banking information.

It got worse. Advertising networks bought the ad blocking browser plugins and turned them into ads. Websites added plugin detectors to block readers from reading if they wouldn’t let the site’s advertising try to grift them.

Reams of first- and third-party JavaScript power these websites. Generally speaking, the HTML and text of a website are overly complex and, simultaneously, don’t amount to all that much data. But the ads, these reams of JavaScript, and sometimes video cost readers concrete device resources: memory, CPU, and (potentially capped or otherwise limited) networking bandwidth.
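
That cost is measurable from the reader’s side. Here is a TypeScript sketch using the browser’s standard Resource Timing API to tally transferred bytes by resource type; note that cross-origin resources report a transfer size of 0 unless the serving host opts in with a Timing-Allow-Origin header, so real ad weight is often undercounted.

    // Tally bytes transferred per resource type using the Resource Timing API.
    // Run as a page script or in the browser console after the page has loaded.

    function bytesByResourceType(): Record<string, number> {
      const totals: Record<string, number> = {};
      const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
      for (const entry of entries) {
        // initiatorType is e.g. "script", "img", "css", "fetch", "xmlhttprequest".
        const type = entry.initiatorType || "other";
        totals[type] = (totals[type] ?? 0) + entry.transferSize;
      }
      return totals;
    }

    for (const [type, bytes] of Object.entries(bytesByResourceType())) {
      console.log(`${type}: ${(bytes / 1024).toFixed(1)} KiB transferred`);
    }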

Another contemptible practice is not even loading the whole article at once. Instead, a few paragraphs, separated by ad positions and then some manner of “Click to read more” button. I haven’t figured out the why here. The expensive page-retrieval and load has already occurred. Maybe it’s a manner of protecting against reader modes or other weird robot mitigation. It seems related to a largely disappeared trend to split web articles across multiple pages.

Now, let’s say a reader makes it well into an article, perhaps to the end. I hope that reader is expecting chumboxes, because that’s what’s there. In the past, a news article would have three or six terrible ads for get rich quick or lose weight fast schemes. Now, those lousy and deceptive visual ads will darn near infinitely scroll on some websites. Famous crypto surgeon knows the 30 second routine to clear out your offline wallet. Do this once everyday, and throw away this vegetable.

What I find frustrating is news sites have made putting any sort of news article in context much harder than it ought to be. Stories typically aren’t updated to point at newer, more complete information. It’s rarely possible to know if a particular article contains the most recent information a news organization has. But, somehow, simultaneously, looking through archives in a systematic fashion or even just browsing to past-days news is nigh impossible.

Substantial responsibility for the reading experience decline I’m describing lies at the feet of the news industry, through a combination of the collapse of advertising revenue and a disinclination to maintain in-house expertise.

In 1999, at the beginning of my brief reporting and editing career, the newspaper I worked at, the Press-Tribune of Roseville, Calif., did not have an internal library. The company that bought it joined it with several other local newspapers in the foothills and suburbs east of Sacramento and cut staff, including the librarian. They sold off the presses, sublet the now-empty half of the offices formerly occupied by the press, and printed the papers from a central facility in Auburn. Our work computers had small hard drives, even for the time. We could only keep two or three editions of the paper’s QuarkXPress files around before our managing editor came around and removed anything but established templates. After the paper went to press, a week or so later, we had no company-maintained record of what we’d printed. This working arrangement was pretty informative for where the rest of the news industry ended up by 2021.

The way this plays out online is archives start off terribly and get worse over time. Newspapers change online publishing systems, URLs change, and it’s rare for a paper or news site to either redirect old URLs or migrate the articles at the old address to a new system. The New York Times does it well, as does The Atlantic magazine. Lots of local news sites, however, do not, and our communities, local and online, are poorer for it. The attention sites pay to their site design has declined, apart from finding more opportunities to interrupt readers.

Collectively, reading any manner of long-form content online, particularly news, has gotten worse. The Web could be and should be so much more than it is. Faster pages, easy-to-find credible content, credible information that’s updated as circumstances warrant and doesn’t mysteriously vanish when a content management system changes. Reading online should not require megabytes of third-party JavaScript, and it should not require trading away privacy, device security, and substantial portions of device and networking resources just to read.

Readers deserve better.

2020 books

One of the things I like at the end of each year is seeing folks’ lists of books.

My list for the year is here. My easy favorite was Ta-Nehisi Coates’ Between the World and Me. By subject matter, the book has been timely since publication; I was just late to pulling it off the shelf. As prose, it is one of the most astonishing written works I’ve ever read. The lines filled with spirit, not one wasted. It is a book I expect to return to often.

Group f.64, by Mary Street Alinder, has also been out for a bit, but was another favorite. Alinder is Ansel Adams’ former assistant and biographer. Here, she turns her attention to the photographic community and broader spirit Adams helped found. It is deeply researched, taking the better part of two decades of work to get to publication. The endnotes and references are about one-third as long as the book itself. Worthwhile.

Elsewhere, here are some end of the year book posts I appreciated:

I’ve started my 2021 books page. I have two books in flight presently, and I’ll add them to the page once I finish the books.

🔗 Links for Dec. 30, 2020

Cryptocurrency Start-Up Underpaid Women and Black Employees, Data Shows

Nathaniel Popper reporting for The New York Times:

The fast-growing cryptocurrency start-up Coinbase has been rattled in recent months by tensions between executives and employees who said they were being treated unfairly because of their race or gender.

While management at the company has argued that the complaints were limited to a handful of employees, Coinbase’s own compensation data suggests that inequitable treatment of women and Black workers went far beyond a few disgruntled workers.

The Coinbase figures arrived at by Ms. Marr took account of the job level of all employees, as well as their status as an engineer and manager. It is possible that if the analysis took account of more factors, the pay disparity would shrink.

In the 14 job categories at Coinbase with at least three women, the average woman earned less than the average man in all but two job categories.

Black employees earned less, on average, than white employees in all but one of the eight job categories that had any Black staff members, the analysis by Ms. Marr shows.

The wage disparities are compounded by the fact that women and Black employees were concentrated in the lower-paying jobs at the company.

It does not surprise me in any way to read this about a company whose CEO made loud noises about staff being “mission-focused” and apolitical at work.

It’s Peak Season for Tamales in Los Angeles

Tejal Rao, reporting for The New York Times:

The Mesoamerican dumpling, made with nixtamalized corn dough and a variety of fillings, has been around for thousands of years. Called tamalli in Nahuatl, a language spoken by Indigenous peoples in Mexico and Central America, it’s still referred to in its singular as a tamal, or tamale.

It can be a source of deliciousness, comfort, cultural connection or income, but the tamal is not a monolith, and there’s no single, correct way to make it.

Dr. Jeremy Littau: “I miss Christmas tamale season in California.”

Maybe You Don’t Need Kubernetes

Matthias Endler:

Kubernetes is the 800-pound gorilla of container orchestration.

It powers some of the biggest deployments worldwide, but it comes with a price tag.

Especially for smaller teams, it can be time-consuming to maintain and has a steep learning curve. For what our team of four wanted to achieve at trivago, it added too much overhead. So we looked into alternatives — and fell in love with Nomad.

We use Kubernetes at work. We’re a decently-sized engineering organization with several teams each supporting two or more applications. The complexities are worthwhile for us since our infrastructure team has a common framework for supporting applications deployed with Kubernetes.

For side projects, or a small shop, I would not start with Kubernetes.

The Frightening State of Security Around NPM Package Management

David Bryant Copeland:

I take GitHub’s new security vulnerability notifications seriously, and try to patch my apps whenever something comes up. I recently had trouble doing so for a JavaScript dependency, and uncovered just how utterly complex management of NPM modules is, and how difficult it must be to manage vulnerable packages. And I’m left wanting. I’m also left more concerned than ever that the excessive use of the NPM ecosystem is risky and dangerous.

The problem stems from three issues, each compounding the other:

  • NPM’s management of transitive dependencies that allows many versions of the same module to be active in one app.
  • Core tooling lacking support to identify and remediate the inclusion of insecure modules.
  • Common use of the same package.json for client and server side bundles.

This is an article from 2019, focused on NPM and JavaScript. More broadly, it’s a reminder to truly own your software dependencies. GitHub’s Dependabot is really helpful for getting automated updates, where they’re possible. It is insufficient to rely on it alone, though, so the responsibility remains with project maintainers to stay aware and on top of security updates. Choose your dependencies conservatively and wisely.
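
To see Copeland’s first point concretely (many versions of the same module active in one app), here is a rough Node/TypeScript sketch that walks a project’s node_modules tree and reports packages installed under more than one version. It assumes the conventional nested node_modules layout and is a diagnostic sketch, not a substitute for proper audit tooling.

    // Walk node_modules and report packages present in more than one version.
    // Rough diagnostic; assumes the conventional nested node_modules layout.
    import * as fs from "fs";
    import * as path from "path";

    const versions = new Map<string, Set<string>>();

    function visit(dir: string): void {
      if (!fs.existsSync(dir)) return;
      for (const name of fs.readdirSync(dir)) {
        if (name.startsWith(".")) continue;
        const entryPath = path.join(dir, name);
        if (name.startsWith("@")) {
          visit(entryPath); // scoped packages live one directory deeper
          continue;
        }
        const manifest = path.join(entryPath, "package.json");
        if (fs.existsSync(manifest)) {
          const pkg = JSON.parse(fs.readFileSync(manifest, "utf8"));
          if (pkg.name && pkg.version) {
            if (!versions.has(pkg.name)) versions.set(pkg.name, new Set());
            versions.get(pkg.name)!.add(pkg.version);
          }
          visit(path.join(entryPath, "node_modules")); // nested copies
        }
      }
    }

    visit(path.join(process.cwd(), "node_modules"));

    for (const [name, found] of versions) {
      if (found.size > 1) {
        console.log(`${name}: ${Array.from(found).join(", ")}`);
      }
    }

For real projects, npm’s own npm ls and npm audit commands cover similar ground; the sketch just makes the duplication directly visible.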

Russia’s SolarWinds Attack

Bruce Schneier:

The US prioritizes and spends many times more on offense than on defensive cybersecurity. In recent years, the NSA has adopted a strategy of “persistent engagement,” sometimes called “defending forward.” The idea is that instead of passively waiting for the enemy to attack our networks and infrastructure, we go on the offensive and disrupt attacks before they get to us. This strategy was credited with foiling a plot by the Russian Internet Research Agency to disrupt the 2018 elections.

But if persistent engagement is so effective, how could it have missed this massive SVR operation? It seems that pretty much the entire US government was unknowingly sending information back to Moscow. If we had been watching everything the Russians were doing, we would have seen some evidence of this. The Russians’ success under the watchful eye of the NSA and US Cyber Command shows that this is a failed approach.

And how did US defensive capability miss this? The only reason we know about this breach is because, earlier this month, the security company FireEye discovered that it had been hacked. During its own audit of its network, it uncovered the Orion vulnerability and alerted the US government. Why don’t organizations like the Departments of State, Treasury and Homeland Security regularly conduct that level of audit on their own systems? The government’s intrusion detection system, Einstein 3, failed here because it doesn’t detect new sophisticated attacks — a deficiency pointed out in 2018 but never fixed. We shouldn’t have to rely on a private cybersecurity company to alert us of a major nation-state attack.

Schneier has the most level-headed, thorough, and considered write-up of the SolarWinds incident that I’ve seen so far.