A Thought on Black American Culture and the Racial Wealth Gap

I listened to this conversation between Dr. Glenn Loury and Coleman Hughes with great interest.  I found it to be at times thoughtful, challenging, frustrating, and maddeningly incomplete.  One example of the incompleteness, if not the flawed nature, of the conversation was the discussion of the correlation between blood lead levels and levels of crime.  Hughes cited a book titled Lucifer Curves on this subject, but there is additional scholarship that seems to support the lead-crime hypothesis (with lead as a contributing factor, though not the only one, in the rise and fall of crime).  What made the argument incomplete and frustrating was how quickly Hughes tossed off the assertion that “we’ve already removed lead from fuel”.  Absent from this throwaway line are factors like:

  • Sources of lead well beyond just fuel, including pipes, paint, dust, toys, pottery, imported canned goods, industrial waste, and batteries.
  • In addition to being highly-toxic, lead has no truly “safe” level of exposure.  Damage from lead exposure is cumulative.
There are thousands of communities (not just Flint, Michigan) with lead in older housing stock (in paint dust or other sources).  Fairly often, these homes are where poor people live.  Fairly often, these poor people are black and brown.  And despite Dr. Loury’s stated desire for social remedies to be implemented on individual terms instead of racial ones, arguments about culture have been (and still are) used to stigmatize black and brown people who are poor as lazy and morally inferior in terms rarely applied to white people who are poor.  This “deserving poor” framing makes it that much easier to deny social remedies (and the government funds that enable them) to such communities–including remedies like lead abatement.  It also makes it easier to police such communities disproportionately compared to others, as seen in the case of Freddie Gray (who came from a community in Baltimore with a much higher prevalence of lead poisoning than elsewhere in my home state).  It’s also worth noting that in a previous episode of The Glenn Show, his guest  (Thomas Chatterton Williams) made a point about the history of arguments for class-based social remedies being undermined by racism.

The JavaScript Guide to Clean Code

This is a great presentation with numerous examples of clean JavaScript (and the much worse alternatives). Highly-recommended for refreshing your grasp of JavaScript fundamentals (and building further upon them).


Retroactive Repudiation

Unless you’ve been living under a rock, you’re aware that Virginia’s Democratic Party has been trying and failing to navigate a controversy about blackface because both the governor and the attorney general of the state were found to (or confessed to) having donned blackface in the past.  I mention this only to provide some context for a Coleman Hughes piece I was sent by a conservative friend of mine.  It may indicate something about the quality of the argument that it is written by a black undergraduate philosophy major at Columbia University instead of by any number of black conservatives with more of a track record.  But this 40-something graduate of a state university is going to share some of the ways in which Coleman Hughes’ opinion is not merely weak, but dishonest.

Strike one against this piece: he winds the clock back to last year to remind the reader of Megyn Kelly’s brush with blackface, and the resulting cancellation of her show.  Omitted from Hughes’ history is Kelly’s long record of racist comments at her prior employer, Fox.  There’s also her regular practice of bringing on racist ex-cop Mark Fuhrman as a guest to comment on various issues.  Despite this history, NBC not only hired her anyway, but gave her the timeslot held by two black anchors (Al Roker and Tamron Hall).  What really prompted NBC to cancel her show was a decision that the blowback from their own staff wasn’t worth it, given her show’s low ratings.

Skipping past Hughes’ brief history of blackface, we get to the heart of his issue: a proposed zero-tolerance policy toward anyone who has ever worn blackface, and the way in which it would “thin out the supply of reputable public figures rather quickly”.  To make his point, he then rolls out a long list of white actors and late-night personalities who have donned blackface for commercials, movies, or TV.  He names a few famous dead actors and actresses for good measure.  That’s strike two against this piece.  It’s a complete dodge of the issue at hand (which is the past racist behavior of elected officials, and what consequences if any should result), and a transparently obvious dig not just at the left, but the “Hollywood left”.

Having chosen Bouie’s argument as representative, Hughes presumes to knock it down with this: “Anyone uncomfortable with the liquidation of much of America’s artistic class should reject the idea of a retroactive zero-tolerance policy toward blackface. Instead, we should take a more measured approach, one that, without minimizing the ugly legacy of minstrelsy, allows a modicum of mercy for the accused and accounts for the intentions of the transgressor.”  We’ll call this strike three, because as with seemingly anything involving redress of harm to black people in this country, the “more measured approach” is already regularly-applied–and the way blackface is treated will be no different.

Even as I write this, Northam remains in office, attempts to shame him into resigning having failed.  He is still deploying what others have called “the Shaggy defense”, saying he wasn’t the person in blackface or the person in the Klan hood and robe.  He’s even attempted to pivot to making some sort of racial reconciliation the theme of his remaining years in office.  That effort is off to a poor start, since he made the obvious mistake of calling enslaved Africans “indentured servants” before being corrected by his black interviewer, Gayle King.  Mark Herring remains in office as well.

A fourth strike: the piece’s omission of Republicans from among the “professionally offended”.  Despite being a party with a candidate for multiple statewide offices in Virginia who campaigned on preserving Confederate monuments, and a state senate majority leader who edited a yearbook at Virginia Military Institute chock full of racist photos and slurs, this same party (to say nothing of the President of the United States) had the audacity to call for the resignations of Northam, Justin Fairfax (the lieutenant governor facing two allegations of sexual assault), and Mark Herring (the attorney general).

Hughes quotes Bayard Rustin at length in advocating for his “more measured approach” to those who demonstrated sufficiently poor judgment to think blackface an appropriate thing to do.  Rustin is worth quoting in full here:

“I think the time will come in the future when the Negro will be accepted into the social, economic, and political life of our country when it will no longer be dangerous to do this sort of thing, and then, of course, we would not be opposed to minstrels per se.”

If NBC executives felt sufficiently comfortable with throwing two black anchors out of their time slot in favor of a white woman with a history of making racist comments, how accepted are black people really in the social and economic life of our country?  If voters could replace the country’s only black president with the man who spent a majority of the preceding decade cheerleading for the birther movement, how accepted are black people really in the political life of our country?  With neighborhoods and schools re-segregating and supposedly-liberal northerners fighting the integration of their schools today, it seems we are a long way from a time when we can afford to tee-hee about minstrelsy.


The Virtue Signalers Won’t Change the World

A piece well worth the time to read, regardless of your ideology.  I find Dr. McWhorter’s characterization of anti-racism as a religion (a critique made far more eloquently and convincingly by Dr. Glenn Loury) overly simplistic.  As the eldest child of immigrants who came to the U.S. from Jamaica in 1969, my chief objection to the piece is my parents’ generation being held up as evidence that anti-black racism wasn’t sufficiently onerous to prevent their success.  I see these examples held up often in conservative circles, and they never seem to go beneath the surface.

McWhorter (and others) severely underestimate the degree to which having immigrant parents is an advantage–not just in terms of different expectations, worldview, and culture, but because of the absence of baggage tied to the country’s history. The mantra he blithely refers to in the beginning of that paragraph is one I only recall hearing once or twice in the 24 years I lived at home before moving out–and only in jest, not seriously.  Black people whose parents and grandparents were born in the United States almost certainly remember a time when that mantra was also a lived experience.  Interestingly enough, the very piece McWhorter links to says the following:

“While U.S. born blacks have had to battle generations of institutional racism, such as predatory lending, that has put them at a socioeconomic and psychological disadvantage that some immigrants have not experienced in this country.”

That shortcoming aside, McWhorter is making a good faith argument.  His desire is for meaningful action on the part of progressives to improve the lives of black Americans.  While I’m not a fan of the term “virtue signaling” (it’s a pejorative often found in the mouths of conservatives making bad-faith arguments), McWhorter is right in describing what one might call “performative activism” as a dead end.


Ta-Nehisi Coates isn't Voldemort

I'm no Harvard-trained historian like Leah Wright Rigueur (https://www.hks.harvard.edu/faculty/leah-wright-rigueur).  I'm a 40-something black man with a wife and 3-year-old twins who has written software (and led teams that write software) for over 20 years.  That experience, my upbringing in Maryland as the son of two Jamaican parents, my travels to various parts of North America, the Caribbean, western Europe, Scandinavia, and my engagement with black conservatives both in my family and among acquaintances online give me a different perspective on the topic of race.  While there are certainly mistakes black liberals make when talking about race (if not some of the same mistakes black conservatives make when talking about race), they aren't the focus of this piece.
I focus here on black conservatives because in the Trump era, I think authentic black conservatives are perhaps the only credible voices remaining for an ideology that has otherwise been thoroughly-discredited by the actions and policies of mainstream conservatives in the form of the Republican Party.  I see merit in their pro-family, pro-faith, pro-entrepreneurial, and fiscally conservative positions.  Like them, I also hope to see a political landscape where both major parties are compelled to seek black votes to retain or gain political power with real policy changes, and I believe a black conservatism divorced from mainstream conservatism could be a vehicle for that if presented more effectively.
Ta-Nehisi Coates isn't Voldemort
No one seems to live rent-free in the heads of as many black conservatives as Ta-Nehisi Coates.  I've heard and read criticism of him from Glenn Loury, John McWhorter, Ayaan Hirsi Ali, Coleman Hughes, Kmele Foster, and other black conservatives.  Ali has accused him of spreading racial poison.  McWhorter has on multiple occasions mocked Coates because of how much white liberals seem to like his work.  To me, the criticism comes across as more than a little elitist.  Nearly every critic I've named either attended, currently attends, or teaches at an Ivy League university.  Coates by contrast dropped out of Howard University.  By simultaneously not taking the ideas Coates puts forward seriously enough, and by making their criticism more about him and those of his fans who are white than about the flaws in his ideas, all of these critics do the cause of black conservatism a disservice.  In a recent episode of the Fifth Column podcast that's been making the rounds, the panelists (which included a number of the critics named earlier, plus Thomas Chatterton Williams) only reluctantly brought up Coates' name, and after that skirted around it with various euphemisms as if he were a mythic figure instead of a flesh-and-blood person.
There are substantive grounds on which to challenge the work of Ta-Nehisi Coates.  Having read his essays for a few years now (along with his book Between the World and Me), I think his atheism contributes to making his worldview far too negative.  It also places him outside the tradition of the black Christian church--perhaps the key institution within the black community--which disconnects him from a critical element for understanding how and why the community acts the way it does.  And I say that as someone who respects Coates' work greatly.  Dr. Chidike Okeem has made substantive criticisms of the work (not the man) from a conservative perspective.  Dr. Sandy Darity has also done so.
Coates' black conservative critics would do well to follow the examples of Okeem and Darity.  Conceding that Coates is a talented writer doesn't weaken truly constructive criticism in the least.  Acknowledging that his experience as a child in inner city Baltimore is shared by too many black boys in this country doesn't weaken constructive criticism either.  These concessions and acknowledgements would give the criticism far more credibility than it currently has--or at least help to dispel the impression that the criticisms are due to jealousy or some factor other than fundamental disagreement with Coates' ideas.
Maybe Actually Talk About Race, Instead of Just Dunking on Black People?
It is entirely possible (if not probable) that there are black conservatives having in-depth and nuanced conversations about blackness, whiteness, and "otherness" in America that I've completely missed.  But most of what I see today within mainstream conservatism, black conservatism (both authentic and fake), and from notable figures on the left is dunking on black people.  Glenn Loury at least brings data, and a genuine love for black people to his criticism, but ultimately he's still dunking on black people.  Barack Obama did this at various points throughout his presidency.  Bill Cosby gained even more of a following than he already had for doing this (prior to his downfall for decades of predatory sexual behavior).  Dr. Ben Carson gave speeches on this topic for years before he ran for president and became HUD secretary under Trump.  It seems clear enough that dunking on black people (particularly in front of mostly-white audiences) served the interests of Obama, Cosby, and Carson pretty well (as it did for Bill Clinton during his presidency).  But it definitely didn't help black people.
To be clear, I'm not saying that out-of-wedlock births, work ethic, violence, or criminal behavior are somehow out-of-bounds for discussion.  Exercising agency or personal responsibility (whether your reasons are morality or simple pragmatism) can be a key ally in escaping unfavorable circumstances.  I'm saying that isn't where the discussion should end--and too often black conservatives, like their non-black counterparts, stop there.
Among black conservatives, there seems to be an impatience with the relative lack of economic progress of black people since the passage of the Civil Rights, Voting Rights, and Fair Housing Acts in 1964, 1965, and 1968.  They can readily cite many statistics measuring the many ways in which black people are trailing their white, Asian, and Hispanic counterparts in the United States.  Too often unexplored are the variety of ways in which the federal government retreated from full enforcement of civil rights, voting rights, and fair housing on behalf of black people in the same way it did after Reconstruction.  So are housing segregation and the ways in which school funding formulas tend to result in segregated and under-resourced schools.  It is one thing to point out the number of decades that have elapsed since the 1960s and the relatively small amount of progress for black people during that time.  But that lack of progress looks a lot different when you realize that the federal government ceased any real enforcement of those laws after a decade (if even that long).

Calling Out Racist Voters Is Satisfying. But It Comes at a Political Cost.

theintercept.com/2018/11/1…

I’m not sure what took so long for the “broad political left” to conclude that Trump is a racist.  Before he even ran for president, there was his cheerleading for the birther conspiracy, his insistence on the guilt of the Central Park Five despite their exoneration by DNA evidence, derogatory comments about Native Americans (back in the 90s when their casinos were competition for his), and being sued (along with his father) by the Department of Justice in the 1970s for discriminating against blacks in housing.  It should also be noted that quite a bit of the so-called mainstream media still uses euphemisms to characterize Trump’s behavior.

From this point forward, however, Ms. Gray seems to be playing a game.  On the one hand, there’s a subtle criticism of The Daily Beast for not publishing the full context of Sanders’ remark.  On the other, a concession that “Sanders’s comment didn’t make much sense”.  Finally, she tries to rationalize Sanders’ statement by comparing it with those of other politicians.

This game doesn’t work because unlike Gillum, McCain, O’Rourke, or Obama, Sanders gave white voters who didn’t vote for black candidates solely because of their race an excuse.  He renamed their rationale as “[feeling] uncomfortable” when even the author of this piece concedes it is racist by definition.  Sanders engaged in what many in the supposedly-liberal mainstream media have done for the better part of two years since Trump’s election–find some reason (any reason at all) for Trump’s victory that didn’t involve racism.  Many column inches were written about economic anxiety, fear of change, change happening too quickly, etc.  How could it be that people who voted for Obama would vote for Trump, some pieces asked (not mentioning, or purposely ignoring, the outcome of the midterm election after Obama’s re-election).

To be fair, the author of this piece is correct in describing the ways that racism is exploited by politicians from both parties.  Even in this election cycle, the candidates of color and women who won the nomination of the Democratic party (and the general election as well in some cases) for various seats across the country (New York, Massachusetts, Illinois, and Minnesota are just some examples) often had to beat the party establishment’s choice to do so.

It is one thing to argue that you shouldn’t label people as “deplorables” or “racists” even if they are.  It is quite another to essentially argue that it was ok for Sanders to equivocate because it will gain him necessary white votes in 2020 if and when he runs for president again.  The sad truth is that assertions of what type of candidates “the country is ready for” continue to be driven by those with the most backward beliefs and attitudes because too many in the majority who aren’t racist, misogynist, or xenophobic are either silent, or willing to equivocate like Sanders did.  Perhaps the challenges to these old and tired stereotypes need to come from the people instead of politicians.


Thoughts on America’s Need for a Healthy Conservatism

nymag.com/daily/int…

The link above is Andrew Sullivan’s latest diary entry for New York Magazine (his regular gig since “unretiring” from blogging).  Any analysis of this piece must begin with the picture that precedes the first word.  Behind Trump stand Mike Pence, the current vice president, and Paul Ryan, the current speaker of the House and 2012 vice presidential candidate on Mitt Romney’s ticket.  Any column that purports to discuss the need for a healthy conservatism and fails to even name two of Trump’s key enablers–both long-time members of the mainstream conservative movement–is already falling short of its purported goal.  There is no mention of either man in Sullivan’s column.

Instead, Sullivan puts forward the things he is for as a conservative, and references a book by Roger Scruton that he feels defines conservatism well.  While he references a few of the ideals and people I typically hear from other conservative thinkers, what Sullivan ultimately describes as conservatism is an ideal mostly disconnected from both history and its current political expression.  He attempts to separate the Republican Party from conservatism as well, as if a philosophy of how a state should function can realistically be separated from a party in power claiming to hold its ideals dear.  But the most telling omission from Sullivan’s hagiographic treatment of conservatism is Barry Goldwater.  Only by not mentioning Goldwater at all can Sullivan allow his instinct for false equivalence to take over and blame “the left” for putting his (false) ideal of conservatism under siege.  He then resorts to the same tired, pejorative use of the term “social justice”, too common among conservatives, to describe the left’s advocacy on behalf of those who are neither white nor male.

Having gotten in his requisite dig at the left regarding its attacks on conservatism, Sullivan claims his conservatism is “anguished when the criminal justice system loses legitimacy, because of embedded racism.”  I find that especially puzzling, because I’ve seen precious little evidence that mainstream conservatism sees the criminal justice system as somehow illegitimate because of its disparate treatment of people of color.  Other conservatives, like Sohrab Ahmari, Erin Dunne, and David French, have publicly rethought certain positions regarding the criminal justice system in the aftermath of Botham Shem Jean’s senseless death at the hands of the police.  Sullivan doesn’t do that here.  Meanwhile, New Yorkers of color alone can point to the abuse of Abner Louima by the police (in 1997), the wrongful death of Patrick Dorismond at the hands of police (in 2000), the conditions of Rikers Island, and other aspects of the criminal justice system to question its legitimacy.

So-called mainstream conservatism is deathly ill precisely because it lacks sufficient diversity of race, class, gender, and faith both among its most high-profile advocates and its rank-and-file.  The small number of its advocates who are neither white nor male are given prominence only because they speak in favor of the status quo–not for a genuine equality.

Sullivan is correct in describing the degree to which the GOP is actively destroying what he sees as the tradition of mainstream conservatism.  They believe in tax cuts to the exclusion of all else–including fiscal solvency.  Their deregulatory fervor will result in an environment that will put our health at risk.  Their unstinting support of Trump lays bare the contempt the GOP has for the rule of law–unless it applies to those they dislike.  Which makes it all the more jarring when he writes: “I also believe we need to slow the pace of demographic and cultural change.”  Whether he intends the statement to do so or not, Sullivan gives aid and comfort not just to the immigration restrictionist, but to Stephen Miller and those like him who seek not to slow the pace of immigration but to reverse it.  When Sullivan writes that “the foreign-born population is at a proportion last seen in 1910″, he effectively endorses the arguments of Jeff Sessions, who spoke glowingly of a 1924 immigration law with the express purpose of keeping Asians, Africans, southern Europeans, and eastern Europeans out of the United States.  He can insist all he wants that seeking to slow the pace of immigration “is not inherently racist”, but when his arguments can’t be meaningfully distinguished from those of Jeff Sessions, he ought not be surprised when he’s not treated as arguing in good faith.

There might be something to the Sullivan argument regarding “elite indifference to mass immigration”, were it not for the fact that a Senate immigration bill passed with 68 votes in the recent past didn’t become law because the GOP-controlled House refused to take it up (perhaps fearing it would pass).  No doubt some in the GOP want cheap, exploitable labor.  The Democrats may indeed encourage it because they think it will get them votes.  Neither of these changes the necessity of immigrants to our labor force.  There are plenty of difficult jobs Americans don’t want to do that immigrants will do.

The following passage of Sullivan’s latest diary is a fairly tight summary of straw man arguments and falsehoods:

“A nation has to mean something; to survive, it needs a conservative weaving of past, present, and future, as Burke saw it. And you cannot do that if you see this country as a blight on the face of the earth and an instrument of eternal oppression; or if you replace a healthy, self-critical patriotism with an ugly, racist nationalism that aims to restore the very worst of this country’s past, rather than preserve its extraordinary and near-unique achievements.”
There may be some who see the United States of America as a blight and an instrument of “eternal oppression”, but I have my doubts that the majority of the left believes this, as he implies.  The idea that this country ever widely believed in a healthy, self-critical patriotism is laughable.  You need only look at school board battles over what to teach our children as history to know that many would prefer to teach them propaganda rather than the truth.  The ugly, racist nationalism to which he refers was never truly past.  It may have retreated into the shadows or underground, but it was never entirely gone.  The continuing battle over Confederate monuments and the endurance of the Lost Cause myth should be sufficient evidence of that.

Sullivan’s idea that somehow a healthy conservatism would rescue the United States from the position it finds itself in is wishful thinking.  Without an honest reckoning with Barry Goldwater’s role in shaping what conservatism has become, and how easily the purported conservative party was taken over by Trump, mainstream conservatism will be fully and deservedly discredited.


Nulls Break Polymorphism, Revisited

Steve Smith wrote this post regarding the problem with null about two years ago.  It’s definitely worth reading in full (as is pretty much anything Steve Smith writes).  The post provided code for the implementation of an extension method and a change in the main code that would address null without throwing an exception.  It also mentioned the null object design pattern but didn’t provide a code example.

I took the code from the original post and revised it to use the null object design pattern.  Unlike the original example, my version of the Employee class overrides ToString() instead of using an extension method.  There are almost certainly any number of other tweaks which could be made to the code which I may make later.
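
For illustration, here is a minimal sketch of the shape that revision takes.  The names and members below are assumptions based on the description above (an Employee class exposing a shared null object and overriding ToString()), not a copy of the code in my repository or in Smith’s post.

public class Employee
{
    // Shared null object instance returned in place of null references.
    public static readonly Employee None = new NullEmployee();

    public Employee(string name)
    {
        Name = name;
    }

    protected Employee() { }

    public string Name { get; protected set; }

    // Overriding ToString() here takes the place of the extension method in the original post.
    public override string ToString()
    {
        return Name;
    }

    // The null object: an Employee with safe, neutral defaults.
    private class NullEmployee : Employee
    {
        public NullEmployee()
        {
            Name = "Unknown Employee";
        }
    }
}

Callers return Employee.None instead of null, so consuming code can call ToString() (or any other member) without null checks or the risk of an exception.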

Smith’s post links additional material that’s also worth checking out.


Thoughts on the Damore Manifesto

I’ve shared a few articles on Facebook regarding the now infamous “manifesto” (available in full here) written by James Damore.  But I’m (finally) writing my own response to it because being black makes me part of a group even more poorly represented in computer science (to say nothing of other STEM fields) than women (though black women are even less represented in STEM fields).

One of my many disagreements with Damore’s work (beyond its muddled and poorly written argument) is how heavily it leans on citations of very old studies. Even if such old studies were relevant today, more current and relevant data debunks the citations Damore uses. To cite just two examples:

Per these statistics, women are not underrepresented at the undergraduate level in these technical fields and only slightly underrepresented once they enter the workforce.  So how is it that we get to the point where women are so significantly underrepresented in tech?  Multiple recent studies suggest that factors such as isolation, hostile male-dominated work environments, ineffective executive feedback, and a lack of effective sponsors lead women to leave science, engineering and technology fields at double the rate of their male counterparts.  So despite Damore's protestations, women are earning entry-level STEM degrees at roughly the same rate as men and are pushed out.

Particularly in the case of computing, the idea that women are somehow biologically less-suited for software development is proven laughably false by simply looking at the history of computing as a field.  Before computers were electro-mechanical machines, they were actually human beings–often women. The movie Hidden Figures dramatized the role of black women in the early successes of the manned space program, but many women were key to advances in computing both before and after that time.  Women authored foundational work in computerized algebra, wrote the first compiler, were key to the creation of Smalltalk (one of the first object-oriented programming languages), helped pioneer information retrieval and natural language processing, and much more.

My second major issue with the paper is its intellectual dishonesty.  The Business Insider piece I linked earlier covers the logical fallacy at the core of Damore’s argument very well.  This brilliant piece by Dr. Cynthia Lee (computer science lecturer at Stanford) does it even better and finally touches directly on the topic I’m headed to next: race.  Dr. Lee notes quite insightfully that Damore’s citations on biological differences don’t extend to summarizing race and IQ studies as an explanation for the lack of black software engineers (either at Google or industry-wide).  I think this was a conscious omission that enabled at least some in the press who you might expect to know better (David Brooks being one prominent example) to defend this memo to the point of saying the CEO should resign.

It is also notable that though Damore claims to “value diversity and inclusion”, he objects to every means that Google has in place to foster them.  His objections to programs that are race- or gender-specific struck a particular nerve with me as a University of Maryland graduate who was attending the school when the federal courts ruled the Benjamin Banneker Scholarship could no longer be exclusively for black students.  The University of Maryland had a long history of discrimination against black students (including Thurgood Marshall, most famously).  The courts ruled this way despite the specific history of the school (which kept blacks out of the law school until 1935 and the rest of the university until 1954).  In light of that history, it should not be a surprise that you wouldn’t need an entire hand to count the number of black graduates from the School of Computer, Mathematical and Physical Sciences in the winter of 1996 when I graduated.  There were only 2 or 3 black students, and I was one of them (and I’m not certain the numbers would have improved much with a spring graduation).

It is rather telling how seldom preferences like legacy admissions at elite universities (or the preferential treatment of the children of large donors) are singled out for the level of scrutiny and attack that affirmative action receives.  Damore and others of his ilk who attack such programs never consider how the K-12 education system of the United States, funded by property taxes, locks in the advantages of those who can afford to live in wealthy neighborhoods (and the disadvantages of those who live in poor neighborhoods) as a possible cause for the disparities in educational outcomes.

My third issue with Damore’s memo is the assertion that Google’s hiring practices can effectively lower the bar for “diversity” candidates.  I can say from my personal experience with at least parts of the interviewing processes at Google (as well as other major names in technology like Facebook and Amazon) that the bar to even get past the first round, much less be hired is extremely high.  They were, without question, the most challenging interviews of my career to date (19 years and counting). A related issue with representation (particularly of blacks and Hispanics) at major companies like these is the recruitment pipeline.  Companies (and people who were computer science undergrads with me who happen to be white) often argue that schools aren’t producing enough black and Hispanic computer science graduates.  But very recent data from the Department of Education seems to indicate that there are more such graduates than companies acknowledge. Furthermore, these companies all recruit from the same small pool of exclusive colleges and universities despite the much larger number of schools that turn out high quality computer science graduates on an annual basis (which may explain the multitude of social media apps coming out of Silicon Valley instead of applications that might meaningfully serve a broader demographic).

Finally, as Yonatan Zunger said quite eloquently, Damore appears to not understand engineering.  Nothing of consequence involving software (or a combination of software and hardware) can be built successfully without collaboration.  The larger the project or product, the more necessary collaboration is.  Even the software engineering course that all University of Maryland computer science students take before they graduate requires you to work with a team to successfully complete the course.  Working effectively with others has been vital for every system I’ve been part of delivering, either as a developer, systems analyst, dev lead or manager.

As long as I have worked in the IT industry, regardless of the size of the company, it is still notable when I’m not the only black person on a technology staff.  It is even rarer to see someone who looks like me in a technical leadership or management role (and I’ve been in those roles myself a mere 6 of my 19 years of working).  Damore and others would have us believe that this is somehow the just and natural order of things when nothing could be further from the truth.  If “at-will employment” means anything at all, it appears that Google was within its rights to terminate Damore’s employment if certain elements of his memo violated the company code of conduct.  Whether or not Damore should have been fired will no doubt continue to be debated.  But from my perspective, the ideas in his memo are fairly easily disproven.


Entity Framework Code First to a New Database (Revised Again)

As part of hunting for a new employer (an unfortunate necessity due to layoffs), I’ve been re-acquainting myself with the .NET stack after a couple of years building and managing teams of J2EE developers.  MSDN has a handy article on Entity Framework Code First, but the last update was about a year ago and some of the information hasn’t aged so well.

The first 3 steps in the article went as planned (I’m using Visual Studio 2017 Community Edition).  But once I got to step 4, neither of the suggested locations of the database worked per the instructions.  A quick look in App.config revealed what I was missing:

Once I provided the following value for the server name:

(localdb)\mssqllocaldb
the databases I could connect to revealed themselves, and I was able to inspect the schema.  Steps 5-7 worked without modifications as well.  My implementation of the sample diverged slightly from the original in that I refactored the five classes out of Program.cs into separate files.  This didn't change how the program operated at all--it just made for a simpler Program.cs file.  The code is available on GitHub.
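
For reference, here’s a rough sketch of the kind of context and entity classes the walkthrough builds.  The Blog/Post/BloggingContext names follow my recollection of the MSDN sample’s conventions (they are illustrative, not a copy of it), and the comment shows where the connection name ties back to App.config.

using System.Collections.Generic;
using System.Data.Entity;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
    public virtual List<Post> Posts { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    public string Title { get; set; }
    public int BlogId { get; set; }
    public virtual Blog Blog { get; set; }
}

public class BloggingContext : DbContext
{
    // "name=BloggingContext" points EF at the matching <connectionStrings> entry in App.config;
    // its Data Source should be the (localdb)\mssqllocaldb instance noted above rather than
    // the locations suggested in the older article.
    public BloggingContext() : base("name=BloggingContext") { }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }
}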

Podcast Episodes Worth Hearing

Since I transitioned from a .NET development role into a management role two years ago, I haven’t spent as much time as I used to listening to podcasts like Hanselminutes and .NET Rocks.  My commute took longer than usual today though, so I listened to two Hanselminutes episodes from December 2016.  Both were excellent, so I’m thinking about how to apply what I’ve heard to directing an agile team on my current project.

In Hanselminutes episode 556, Scott Hanselman interviews Amir Rajan.  While the term polyglot programmer is hardly new, Rajan’s opinions on what programming languages to try next based on the language you know best were quite interesting.  While my current project is J2EE-based, between the web interface and test automation tools, there are plenty of additional languages that my team and others have to work in (including JavaScript, Ruby, Groovy, and Python).

Hanselminutes episode 559 was an interview with Angie Jones.  I found this episode particularly useful because the teams working on my current project include multiple automation engineers.  Her idea to include automation in the definition of done is an excellent one.  I’ll definitely be sharing her slide deck on this topic with my team and others.


Best Practices for Software Testing

I originally wrote the following as an internal corporate blog post to guide a pair of business analysts responsible for writing and unit testing business rules. The advice below applies pretty well to software testing in general.

80/20 Rule

80% of your test scenarios should cover failure cases, with the other 20% covering success cases.  Too much of testing (unit testing or otherwise) seems to cover the happy path.  A 4:1 ratio of failure case tests to success case tests will result in more durable software.

Boundary/Range Testing

Given a range of valid values for an input, the following tests are strongly recommended:
  • Test of behavior at minimum value in range
  • Test of behavior at maximum value in range
  • Tests outside of valid value range
    • Below minimum value
    • Above maximum value
  • Test of behavior within the range
These tests roughly conform to the 80/20 rule and apply to numeric values, dates, and times; a sketch of them follows below.
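
Here is one way the five cases above might look as xUnit tests.  The PercentageValidator class and its 0-100 valid range are hypothetical, invented purely for the example.

using Xunit;

// Hypothetical class under test: accepts whole-number values from 0 to 100 inclusive.
public class PercentageValidator
{
    public bool IsValid(int value)
    {
        return value >= 0 && value <= 100;
    }
}

public class PercentageValidatorTests
{
    private readonly PercentageValidator _validator = new PercentageValidator();

    [Theory]
    [InlineData(0, true)]    // at the minimum of the range
    [InlineData(100, true)]  // at the maximum of the range
    [InlineData(-1, false)]  // below the minimum
    [InlineData(101, false)] // above the maximum
    [InlineData(50, true)]   // within the range
    public void IsValid_ReturnsExpectedResult(int input, bool expected)
    {
        Assert.Equal(expected, _validator.IsValid(input));
    }
}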

Date/Time Testing

Above and beyond the boundary/range testing described above, the testing of dates creates a need to test how code handles different orderings of those values relative to each other.  For example, if a method has a start and end date as inputs, you should test to make sure that the code responds with some sort of error if the start date is later than the end date.  If a method has start and end times as inputs for the same day, the code should respond with an error if the start time is later than the end time.  Testing of date or date/time-sensitive code must include an abstraction to represent current date and time as a value (or values) you choose, rather than the current system date and time.  Otherwise, you'll have no way to test code that should only be executed years in the future.
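
As a sketch of both ideas (the IClock, FixedClock, and DateRange names are hypothetical, not from any real codebase), the start/end ordering check and an injectable clock might look like this:

using System;
using Xunit;

// Abstraction over "now" so tests can pin the current date/time to a chosen value.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class FixedClock : IClock
{
    private readonly DateTime _utcNow;
    public FixedClock(DateTime utcNow) { _utcNow = utcNow; }
    public DateTime UtcNow => _utcNow;
}

public class DateRange
{
    public DateRange(DateTime start, DateTime end)
    {
        // Reject reversed ranges, per the start/end ordering rule described above.
        if (start > end)
            throw new ArgumentException("Start date must not be later than end date.");
        Start = start;
        End = end;
    }

    public DateTime Start { get; }
    public DateTime End { get; }

    public bool IsActiveOn(IClock clock)
    {
        return clock.UtcNow >= Start && clock.UtcNow <= End;
    }
}

public class DateRangeTests
{
    [Fact]
    public void Constructor_Throws_WhenStartIsLaterThanEnd()
    {
        Assert.Throws<ArgumentException>(
            () => new DateRange(new DateTime(2030, 1, 2), new DateTime(2030, 1, 1)));
    }

    [Fact]
    public void IsActiveOn_UsesInjectedClock_EvenForDatesYearsInTheFuture()
    {
        var range = new DateRange(new DateTime(2040, 1, 1), new DateTime(2040, 12, 31));
        var futureClock = new FixedClock(new DateTime(2040, 6, 15));

        Assert.True(range.IsActiveOn(futureClock));
    }
}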

Boolean Testing

Given that a boolean value is either true or false, testing code that takes a boolean as an input seems quite simple.  But if a method has multiple inputs that can be true or false, testing that the right behavior occurs for every possible combination of those values becomes less trivial.  Combine that with the possibility of a null value, or multiple null values being provided (as described in the next section) and comprehensive testing of a method with boolean inputs becomes even harder.
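
A small sketch of covering every combination with xUnit's [Theory] and [InlineData] attributes (the ApprovalRule class and its two flags are hypothetical examples):

using Xunit;

// Hypothetical rule: approved only when both flags are true.
public static class ApprovalRule
{
    public static bool IsApproved(bool hasManagerSignoff, bool isWithinBudget)
    {
        return hasManagerSignoff && isWithinBudget;
    }
}

public class ApprovalRuleTests
{
    [Theory]
    [InlineData(true, true, true)]
    [InlineData(true, false, false)]
    [InlineData(false, true, false)]
    [InlineData(false, false, false)]
    public void IsApproved_CoversEveryCombinationOfInputs(bool signoff, bool inBudget, bool expected)
    {
        Assert.Equal(expected, ApprovalRule.IsApproved(signoff, inBudget));
    }
}

With nullable inputs (bool?), the same table simply grows to include the null rows.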

Null Testing

It is very important to test how a method behaves when it receives null values instead of valid data.  The method under test should fail in a graceful way instead of crashing or displaying cryptic error messages to the user.
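
For example (the names below are hypothetical), a test can verify that a null argument produces a clear ArgumentNullException rather than a NullReferenceException from somewhere deep inside the method:

using System;
using Xunit;

// Hypothetical method under test.
public static class NameFormatter
{
    public static string Format(string firstName, string lastName)
    {
        // Fail fast with a descriptive error instead of letting a null crash later code.
        if (firstName == null) throw new ArgumentNullException(nameof(firstName));
        if (lastName == null) throw new ArgumentNullException(nameof(lastName));
        return lastName + ", " + firstName;
    }
}

public class NameFormatterTests
{
    [Fact]
    public void Format_ThrowsArgumentNullException_WhenFirstNameIsNull()
    {
        Assert.Throws<ArgumentNullException>(() => NameFormatter.Format(null, "Smith"));
    }
}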

Arrange-Act-Assert

Arrange-Act-Assert is the organizing principle to follow when developing unit tests.  Arrange refers to the work your test should do first in order to set up any necessary data, creation of supporting objects, etc.  Act refers to executing the scenario you wish to test.  Assert refers to verifying that the outcome you expect is the same as the actual outcome.  A test should have just one assert.  The rationale for this relates to the Single Responsibility Principle.  That principle states that a class should have one, and only one, reason to change.  As I apply that to testing, a unit test should test only one thing so that the reason for failure is clear if and when that happens as a result of subsequent code changes.  This approach implies a large number of small, targeted tests, the majority of which should cover failure scenarios as indicated by the 80/20 Rule defined earlier.
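
A short sketch of a test organized this way (the ShoppingCart class is a hypothetical stand-in for whatever is under test):

using System.Collections.Generic;
using System.Linq;
using Xunit;

// Hypothetical class under test.
public class ShoppingCart
{
    private readonly List<decimal> _prices = new List<decimal>();

    public void AddItem(string name, decimal price)
    {
        _prices.Add(price);
    }

    public decimal Total()
    {
        return _prices.Sum();
    }
}

public class ShoppingCartTests
{
    [Fact]
    public void Total_IsSumOfItemPrices()
    {
        // Arrange: create the object under test and set up its data.
        var cart = new ShoppingCart();
        cart.AddItem("book", 10.00m);
        cart.AddItem("pen", 2.50m);

        // Act: execute the single scenario being tested.
        decimal total = cart.Total();

        // Assert: one assertion, so a failure points at exactly one behavior.
        Assert.Equal(12.50m, total);
    }
}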

Test-First Development & Refactoring

This approach to development is best visually explained by this diagram.  The key thing to understand is that a test that fails must be written before the code that makes the test pass.  This approach ensures that the test is good enough to catch any failures introduced by subsequent code changes.  This approach applies not just to new development, but to refactoring as well.  This means that if you plan to make a change that you know will result in broken tests, break the tests first.  This way, when your changes are complete, the tests will be green again and you'll know your work is done.  You can find an excellent blog post on the subject of test-driven development by Bob Martin here.

Other Resources

I first learned about Arrange-Act-Assert for unit test organization from reading The Art of Unit Testing by Roy Osherove.  He's on Twitter as @RoyOsherove.  While it's not just about testing, Clean Code (by Bob Martin) is one of those books you should own and read regularly if you make your living writing software.

Software Development Roles: Lead versus Manager

I’ve held the title of development lead and development manager at different points in my technology career. With the benefit of hindsight, one of the roles advertised and titled as the latter was actually the former. One key difference between the two roles boils down to how much of your time you spend writing code. If you spend half or more of your time writing code, you’re a lead, even if your business cards have “manager” somewhere in the title. If you spend significantly less than half your time writing code, then the “manager” in your title is true to your role. When I compare my experience between the two organizations, the one that treats development lead and development manager as distinct roles with different responsibilities has not only been a better work environment for me personally, but has also been more successful at consistently delivering software that works as advertised.

A company can have any number of motivations for giving management responsibilities to lead developers. The organization may believe that a single person can be effective both in managing people and in delivering production code. They may have a corporate culture where only a minimal amount of management is needed and developers are self-directed. Perhaps their implementation of a flat organizational structure means that developers take on multiple tasks beyond development (not uncommon in startup environments). If a reasonably-sized and established company gives lead and management responsibilities to an individual developer or developers, however, it is also possible that there are budgetary motivations for that decision. The budgetary motivation doesn’t make a company bad (they’re in business to make money after all). It is a factor worth considering when deciding whether or not a company is good for you and your career goals.

Being a good lead developer is hard. In addition to consistently delivering high-quality code, you need to be a good example and mentor to less-senior developers. A good lead developer is a skilled troubleshooter (and guide to other team members in the resolution of technical problems). Depending on the organization, they may hold significant responsibility for application architecture. Being a good development manager is also hard. Beyond the reporting tasks that are part of every management role, they’re often responsible for removing any obstacles that are slowing or preventing the development team from doing work. They also structure work and assign it in a way that contributes to timely delivery of functionality. The best development managers play an active role in the professional growth of developers on their team, along with annual reviews. Placing the responsibility for these two challenging roles on a single person creates a role that is incredibly demanding and stressful. Unless you are superhuman, sooner or later your code quality, your effectiveness as a manager, or both will suffer. That outcome isn’t good for you, your direct reports, or the company you work for.

So, if you’re in the market for a new career opportunity, understand what you’re looking for. If a development lead position is what you want, scrutinize the job description. Ask the sort of questions that will make clear that a role being offered is truly a development lead position. If you desire a development management position, look at the job description. If hands-on development is half the role or more, it’s really a development lead position. If you’re indeed superhuman (or feel the experience is too valuable to pass up), go for it. Just be aware of the size of the challenge you’re taking on and the distinct possibility of burnout. If you’re already in a job that was advertised as a management position but is actually a lead position, learn to delegate. This will prove especially challenging if you’re a skilled enough developer to have landed a lead role, but allowing individual team members to take on larger roles in development will create the bandwidth you need to spend time on the management aspects of your job. Finally, if you’re an employer staffing up a new development team or re-organizing existing technology staff, ensure the job descriptions for development lead and development manager are separate. Whatever your software product, the end result will be better if you take this approach.



Security Breaches and Two-Factor Authentication

It seems the news has been rife with stories of security breaches lately.  As a past and present federal contractor, the OPM breach impacted me directly.  That and one other breach impacted my current client.  The lessons I took from these and earlier breaches were:

  1. Use a password manager
  2. Enable 2-factor authentication wherever it's offered
To implement lesson 1, I use 1Password.  It runs on every platform I use (Mac OS X, iOS and Windows), and has browser plug-ins for the browsers I use most (Chrome, Safari, IE).  Using the passwords 1Password generates means I no longer commit the cardinal security sin of reusing passwords across multiple sites.  Another nice feature specific to 1Password is Watchtower.  If a site where you have a username and password is compromised, the software will indicate that site is vulnerable so you know to change your password.  1Password even has a feature to flag sites with the Heartbleed vulnerability.

The availability of two-factor authentication has been growing (somewhat unevenly, but any growth is good), but it wasn’t until I responded to a tweet from @felixsalmon asking about two-factor authentication that I discovered how loosely some people define two-factor authentication.  According to this New York Times interactive piece, most U.S. banks offer two-factor authentication.  That statement can only be true if “two-factor” is defined as “any item in addition to a password”.  By that loose standard, most banks do offer two-factor authentication because the majority of them will prompt you for an additional piece of “out of wallet” information if you attempt to log in from a device with an IP address they don’t recognize.  Such out-of-wallet information could be a parent’s middle name, your favorite food, the name of your first pet, or some other piece of information that only you know.  While it’s better than nothing, I don’t consider it true two-factor authentication because:

  1. Out-of-wallet information has to be stored
  2. The out-of-wallet information might be stored in plain-text
  3. Even if out-of-wallet information is stored hashed, hashed & salted, or encrypted with one bank, there's no guarantee that's true everywhere the information is stored (credit bureaus, health insurers, other financial institutions you have relationships with, etc)
One of the things that seems clear after the Get Transcript breach at the IRS is that the thieves had access to the out-of-wallet information of their victims, either because they purchased the information, stole it, or found it on the social media sites their victims used.

True two-factor authentication requires a time-limited, randomly-generated piece of additional information that must be provided along with a username and password to gain access to a system.  Authentication applications like the ones provided by Google or Authy provide a token (a 6-digit number) that is valid for 30-60 seconds.  Some systems provide this token via SMS so a specific application isn’t required.  By this measure, the number of banks and financial institutions that support it is quite a bit smaller.  One of the other responses to the @felixsalmon tweet was this helpful URL: https://twofactorauth.org/.  The list covers a lot of ground, including domain registrars and cryptocurrencies, but might not cover the specific companies and financial institutions you work with.  In my case, the only financial institution I currently work with that offers true two-factor authentication is my credit union–Tower Federal Credit Union.  Hopefully every financial institution and company that holds our personal information will follow suit soon.
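
To make the time-limited, generated token concrete, here is a rough sketch of how a TOTP code (the RFC 6238 scheme used by Google Authenticator, Authy, and similar apps) is derived.  This is an illustration, not production code; real implementations also handle base32-encoded secrets, clock-skew windows, and constant-time comparison.

using System;
using System.Security.Cryptography;

public static class TotpSketch
{
    public static string ComputeCode(byte[] sharedSecret, DateTime utcNow, int stepSeconds = 30)
    {
        // Number of time steps (30 seconds by default) elapsed since the Unix epoch.
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        long counter = (long)(utcNow - epoch).TotalSeconds / stepSeconds;

        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(counterBytes); // the spec requires big-endian byte order

        using (var hmac = new HMACSHA1(sharedSecret))
        {
            byte[] hash = hmac.ComputeHash(counterBytes);

            // Dynamic truncation: take 4 bytes starting at an offset taken from the last nibble.
            int offset = hash[hash.Length - 1] & 0x0F;
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];

            // Reduce to a 6-digit code; a new one is produced every stepSeconds.
            return (binary % 1000000).ToString("D6");
        }
    }
}

Because both sides derive the code from a shared secret and the current time, nothing about the code itself needs to be stored the way out-of-wallet answers do.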


Bulging Laptop Battery

Until yesterday, I’d been unaware that laptop batteries could fail in a way other than not holding a charge very well. According to the nice fellow at an Apple Genius Bar near my office, this happens occasionally.  I wish I’d been aware of it sooner, so I might have gotten it replaced before AppleCare expired.  When I did some googling, “occasionally” turned out to be a lot more often than I expected.  Half an hour (and $129) later, a replacement battery made everything better.  The battery had expanded to the point that it was pushing on the trackpad and making it difficult to click–in addition to preventing the laptop from sitting flush on flat surfaces.  Now that it has a fresh battery (and even though it’s only a late-2011 MacBook Pro), I’m sort of tempted to replace it with a shinier new one.  My new employer is of the “bring your own device” variety, and the MacBook Pro is quite a lot of weight to schlep to and from the office every day.


Which Programming Language(s) Should I Learn?

I had an interesting conversation with a friend of mine (a computer science professor) and one of his students last week.  Beyond the basic which language(s) question were a couple more intriguing ones:

  1. If you had to do it all over again, would you still stick with the Microsoft platform for your entire development career?
  2. Will Microsoft be relevant in another ten years?
The first question I hadn't really contemplated in quite some time.  I distinctly recall a moment when there was a choice between two projects at the place where I was working--one project was a Microsoft project (probably ASP, VB6 and SQL Server) and the other one wasn't (probably Java).  I chose the former because I'd had prior experience with all three of the technologies on the Microsoft platform and none with the others.  I probably wanted an early win at the company and picking familiar technology was the quickest way to accomplish that.  A couple of years later (in 2001), I was at another company and took them up on an opportunity to learn about .NET (which at the time was still in beta) from the people at DevelopMentor.  It only took one presentation by Don Box to convince me that .NET (and C#) were the way to go.  While it would be two more years before I wrote and deployed a working C# application to production, I've been writing production applications (console apps, web forms, ASP.NET MVC) in C# from then to now.  While it's difficult to know for sure how that other project (or my career) would have turned out had I gone the Java route instead of the Microsoft route, I suspect the Java route would have been better.

One thing that seemed apparent even in 1999 was that Java developers (the good ones anyway) had a great grasp of object-oriented design (the principles Michael Feathers would apply the acronym SOLID to).  In addition, quite a number of open source and commercial software products were being built in Java.  The same could not be said of C# until much later.

To the question of whether Microsoft will still be relevant in another ten years, I believe the answer is yes.  With Satya Nadella at the helm, Microsoft seems to be doubling-down on their efforts to maintain and expand their foothold in the enterprise space.  There are still tons of businesses of various sizes (not to mention state governments and the federal government) that view Microsoft as a familiar and safe choice both for COTS solutions and custom solutions.  So I expect it to remain possible to have a long and productive career writing software with the Microsoft platform and tools.

As more and more software is written for the web (and mobile browsers), whatever “primary” language a developer chooses (whether Java, C#, or something else altogether), they would be wise to learn JavaScript in significant depth.  One of the trends I noticed over the past couple of years of regularly attending .NET user groups was that fewer and fewer of the talks had much to do with the intricacies and syntactic tricks of Microsoft-specific technologies like C# or LINQ.  There would be talks about Bootstrap, Knockout.js, node.js, Angular, and JavaScript.  Multiple presenters, including those who worked for Microsoft partners, advocated quite effectively for us to learn these technologies in addition to what Microsoft put on the market in order to help us make the best, most flexible and responsive web applications we could.  Even if you’re writing applications in PHP or Python, JavaScript and JavaScript frameworks are becoming a more significant part of the web every day.

One other language worth knowing is SQL.  While NoSQL databases seem to have a lot of buzz these days, the reality is that there is tons of structured, relational data in companies and governments of every size.  There are tons of applications that still remain to be written (not to mention the ones in active use and maintenance) that expose and manipulate data stored in Microsoft (or Sybase) SQL Server, Oracle, MySQL, and Postgresql.  Many of the so-called business intelligence projects and products today have a SQL database as one of any number of data sources.

Perhaps the best advice about learning programming languages comes from The Pragmatic Programmer:

Learn at least one new language every year.
One of a number of useful things about a good computer science program is that after teaching you fundamentals, it pushes you to apply those fundamentals in multiple programming languages over the course of a semester or a year.  Finishing a computer science degree should not mean the end of striving to learn new languages.  They give us different tools for solving similar problems--and that ultimately helps make our code better, regardless of what language we're writing it in.

Reflection and Unit Testing

This post is prompted by a couple of things: (1) a limitation in the Moq mocking framework, and (2) a look back at a unit test I wrote nearly 3 years ago when I first arrived at my current company. While you can use Moq to create an instance of a concrete class, you can’t set expectations on class members that aren’t virtual. In the case of one of our domain entities, this made it impossible to implement automated tests for one of our business rules–at least not without creating real versions of multiple dependencies (and thereby creating an integration test). Or so I (incorrectly) thought.

Our solution architect sent me a unit test example that used reflection to set the non-virtual properties in question so they could be used for testing. While the approach is a bit clunky when compared to the capabilities provided by Moq, it works. Here's some pseudo-code of an XUnit test that follows his model by using reflection to set a non-virtual property:

[Fact]
public override void RuleIsTriggered()
{
  var sde = new SomeDomainEntity(ClientId, null);
  SetWorkflowStatus(sde, WorkflowStatus.PendingFirstReview);

  var context = GetBusinessRuleContext(sde);
  Assert.True(RuleUnderTest.When(context.Object));
}

private void SetWorkflowStatus(SomeDomainEntity someDomainEntity, WorkflowStatus workflowStatus)
{
  var workflowStatusProperty = typeof(SomeDomainEntity).GetProperty("WorkflowStatus");
  workflowStatusProperty.SetValue(someDomainEntity, workflowStatus, null);
}

With the code above, if the business rule returned by RuleUnderTest looks at WorkflowStatus to determine whether or not the instance of SomeDomainEntity is valid, the value set via reflection will be what is returned. As an aside, the “context” returned from GetBusinessRuleContext is a mock configured to return sde if the business rule looks for it as part of its execution.

After seeing the previous unit test example (and a failing unit test on another branch of code), I was reminded of a unit test I wrote back in 2012 when I was getting up to speed with a new system. Based on the information I was given at the time, our value objects all needed to implement the IEquatable interface. Since we identified value objects with IValueObject (which was just a marker interface), using reflection and a bit of LINQ resulted in a test that would fail if any types implementing IValueObject did not also implement IEquatable. The test class is available here. If you need similar functionality for your own purposes, changing the types reflected on is quite a simple matter.
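
As a rough sketch of that kind of convention test (it assumes the IValueObject marker interface described above and checks that each implementer also implements IEquatable<T> closed over its own type; the details of the real test class differ), the reflection and LINQ look something like this:

using System;
using System.Linq;
using Xunit;

public class ValueObjectConventionTests
{
    [Fact]
    public void AllValueObjectsImplementIEquatable()
    {
        // Every concrete type in the assembly that carries the IValueObject marker...
        var valueObjectTypes = typeof(IValueObject).Assembly
            .GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && typeof(IValueObject).IsAssignableFrom(t));

        // ...must also implement IEquatable<T> for its own type.
        var offenders = valueObjectTypes
            .Where(t => !t.GetInterfaces().Any(i =>
                i.IsGenericType &&
                i.GetGenericTypeDefinition() == typeof(IEquatable<>) &&
                i.GetGenericArguments()[0] == t))
            .Select(t => t.FullName)
            .ToList();

        Assert.Empty(offenders);
    }
}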


Pseudo-random Sampling and .NET

One of the requirements I received for my current application was to select five percent of entities generated by another process for further review by an actual person. The requirement wasn’t quite a request for a simple random sample (since the process generates entities one at a time instead of in batches), so the code I had to write needed to give each entity generated a five percent chance of being selected for further review.  In .NET, anything involving percentage chances means using the Random class in some way.  Because the class doesn’t generate truly random numbers (it generates pseudo-random numbers), additional work is needed to make the outcomes more random.

The first part of my approach to making the outcomes more random was to simplify the five percent aspect of the requirement to a yes or no decision, where “yes” meant treat the entity normally and “no” meant select the entity for further review.  I modeled this as a collection of 100 boolean values with 95 true and five false.  I ended up using a for-loop to populate the boolean list with 95 true values.  Another option I considered was using Enumerable.Repeat (described in great detail in this post), but apparently that operation is quite a bit slower.  I could have used Enumerable.Range instead, and may investigate the possibility later to see what advantages or disadvantages there are in performance and code clarity.

Having created the list of decisions, I needed to randomize their order.  To accomplish this, I used LINQ to sort the list by the value of newly-generated GUIDs: decisions.OrderBy(d => Guid.NewGuid()) //decisions is a list of bool

With a randomly-ordered list of decisions, the final step was to select a decision from a random location in the list.  For that, I turned to a Jon Skeet post that provided a helper class (see the end of that post) for retrieving a thread-safe instance of Random to use for generating a pseudo-random value within the range of possible decisions.  The resulting code is as follows: return decisions.OrderBy(d => Guid.NewGuid()).ToArray()[RandomProvider.GetThreadRandom().Next(100)]; //decisions is a list of bool
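
Putting the pieces together, a condensed sketch of the whole approach (RandomProvider.GetThreadRandom() being the thread-safe helper from the Jon Skeet post, and the class and method names my own) looks like this:

using System;
using System.Collections.Generic;
using System.Linq;

public static class ReviewSampler
{
    // Returns true to treat the entity normally, false to select it for further review.
    public static bool TreatNormally()
    {
        // 95 "yes" decisions and 5 "no" decisions model the five percent review rate.
        var decisions = new List<bool>();
        for (int i = 0; i < 95; i++) decisions.Add(true);
        for (int i = 0; i < 5; i++) decisions.Add(false);

        // Shuffle by sorting on newly-generated GUIDs, then pick a slot using the
        // thread-safe Random instance from the helper class.
        return decisions
            .OrderBy(d => Guid.NewGuid())
            .ToArray()[RandomProvider.GetThreadRandom().Next(100)];
    }
}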

I used LINQPad to test my code and over multiple executions, I got between 3 and 6 “no” results.


RadioButtonListFor and jQuery

One requirement I received for a recent ASP.NET MVC form implementation was that particular radio buttons be checked on the basis of other radio buttons being checked. Because it’s a relatively simple form, I opted to fulfill the requirement with just jQuery instead of adding knockout.js as a dependency.

Our HTML helper for radio button lists is not much different than this one.  So the first task was to identify whether or not the radio button checked was the one that should trigger another action.  As has always been the case when grouping radio buttons in HTML, each radio button in the group shares the same name and differs by id and value.  The HTML looks kind of like this: @Html.RadioButtonListFor(m => m.Choice.Id, Model.Choice.Id, Model.ChoiceListItems)

where ChoiceListItems is a list of System.Web.Mvc.SelectListItem and the ids are strings.  The jQuery to see if a radio button in the group has been checked looks like this: $("input[name='Choice.Id']").change(function(){ ... });

Having determined that a radio button in the group has been checked, we must be more specific and see if the checked radio button is the one that should trigger additional action. To accomplish this, the code snippet above is changed to the following: $("input[name='Choice.Id']").change(function(){ if($("input[name='Choice.Id']:checked").val() == '@Model.SpecialChoiceId'){ ... } });

The SpecialChoiceId value is retrieved from the database. It’s one of the values used when building the ChoiceListItems collection mentioned earlier (so we know a match is possible). Now the only task that remains is to check the appropriate radio button in the second grouping. I used jQuery’s multiple attribute selector for this task.  Here’s the code: $("input[name='Choice.Id']").change(function(){ if($("input[name='Choice.Id']:checked").val() == '@Model.SpecialChoiceId'){ $("input[name='Choice2.Id'][value='@Model.Choice2TriggerId']").prop('checked', true); } });

The first attribute filter selects the second radio button group, the second attribute filter selects the specific radio button, and prop('checked', true) sets the checked property on that radio button. Like SpecialChoiceId, Choice2TriggerId is retrieved from the database (RavenDB in our specific case).