As a graduate student, Steven Weisberg helped to develop a university campus — albeit, a virtual one. Called Virtual Silcton, the software tests spatial navigation skills, teaching people the layout of a virtual campus and then challenging them to point in the direction of specific landmarks1. It has been used by more than a dozen laboratories, says Weisberg, who is now a cognitive neuroscientist at the University of Florida in Gainesville.

But in February 2020, a colleague who was testing the software identified a problem: it couldn’t accurately compute the direction a participant was pointing if they were pointing more than 90 degrees away from the site. “The first thing I thought was, ‘oh, that’s weird’,” Weisberg recalls. But it was true: his software was generating errors that could alter its calculations and conclusions.
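Angular comparisons are a classic source of such errors. As a purely hypothetical illustration (this is not the Virtual Silcton code), a trigonometric shortcut that folds the signed difference between two bearings will silently shrink any pointing error larger than 90 degrees, whereas wrapping the difference into the range −180 to 180 degrees preserves it:

```python
import math

def pointing_error(pointed_deg: float, true_deg: float) -> float:
    """Absolute angular error between pointed and true bearings, in degrees."""
    diff = (pointed_deg - true_deg) % 360.0   # 0 .. 360
    if diff > 180.0:
        diff -= 360.0                         # wrap into (-180, 180]
    return abs(diff)

def folded_error(pointed_deg: float, true_deg: float) -> float:
    """A tempting shortcut: asin(sin(x)) mirrors any error beyond 90 degrees."""
    return abs(math.degrees(math.asin(math.sin(math.radians(pointed_deg - true_deg)))))

print(round(pointing_error(10, 170), 1))  # 160.0: the participant is badly off target
print(round(folded_error(10, 170), 1))    # 20.0: the same miss looks nearly correct
```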

“We have to retract everything,” he thought.

When it comes to software, bugs are inevitable — especially in academia, where code tends to be written by graduate students and postdocs who were never trained in software development. But simple strategies can minimize the likelihood of bugs and ease the process of recovering from them.

Avoidance

Julia Strand, a psychologist at Carleton College in Northfield, Minnesota, investigates strategies to help people to engage in conversation in, for example, a noisy, crowded restaurant. In 2018, she reported that a visual cue, such as a blinking dot on a computer screen that coincided with speech, reduced the cognitive effort required to understand what was being said2. That suggested that a simple smartphone app could reduce the mental fatigue that sometimes arises in such situations.

But it wasn’t true. Strand had inadvertently programmed the testing software to start timing one condition earlier than the other, which, as she wrote in 2020, “is akin to starting a stopwatch before a runner gets to the line”.
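That kind of timing bug is easy to reproduce in a simulation. In the hypothetical Python sketch below (illustrative only, not Strand’s experiment code), starting the clock before the cue appears quietly adds the cue delay to every measured response in one condition:

```python
import random
import time

def simulate_trial(start_timer_early: bool, cue_delay_s: float = 0.5) -> float:
    """Simulate one reaction-time trial and return the measured response time."""
    if start_timer_early:
        t0 = time.perf_counter()          # bug: the stopwatch starts too soon
        time.sleep(cue_delay_s)           # stand-in for presenting the cue
    else:
        time.sleep(cue_delay_s)           # cue is presented first
        t0 = time.perf_counter()          # correct: timing starts at stimulus onset
    time.sleep(random.uniform(0.2, 0.4))  # stand-in for the participant responding
    return time.perf_counter() - t0

# Identical "participants" look about half a second slower in the buggy condition.
print(round(simulate_trial(start_timer_early=False), 2))  # roughly 0.3 s
print(round(simulate_trial(start_timer_early=True), 2))   # roughly 0.8 s
```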

“I felt physically ill,” she wrote — the mistake could have negatively affected her students, her collaborators, her funding and her job. It didn’t — she corrected her article, kept her grants and received tenure. But to help others avoid a similar experience, she has created a teaching resource called Error Tight3.

Error Tight provides practical tips that echo computational reproducibility checklists, such as: use version control; document code and workflows; and adopt standardized file-naming and organizational strategies.

Its other recommendations are more philosophical. An ‘error tight’ laboratory, Strand says, recognizes that even careful researchers make mistakes. As a result, her team adopted a strategy that is common in professional software development: code review. The team proactively looks for bugs by having two people review their work, rather than assuming those bugs don’t exist.

Joana Grave, a psychology PhD student at the University of Aveiro, Portugal, also uses code review. In 2021, Grave retracted a study when she discovered that the tests she had programmed had been miscoded to show the wrong images. Now, experienced programmers on the team double-check her work, she says, and Grave repeats coding tasks to ensure she gets the same answer.
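Re-implementing a calculation and checking that the two versions agree is a cheap way to catch slips like that. A minimal, generic Python sketch (not Grave’s actual analysis code):

```python
import numpy as np

def accuracy_loop(responses, answers):
    """Reference implementation: an explicit loop."""
    correct = 0
    for r, a in zip(responses, answers):
        correct += int(r == a)
    return correct / len(answers)

def accuracy_vectorised(responses, answers):
    """Independently written version of the same calculation."""
    return float(np.mean(np.asarray(responses) == np.asarray(answers)))

rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=100)
responses = rng.integers(0, 2, size=100)

# If the two versions disagree, at least one of them has a bug.
assert abs(accuracy_loop(responses, answers)
           - accuracy_vectorised(responses, answers)) < 1e-12
```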

Scientific software can be difficult to review, warns C. Titus Brown, a bioinformatician at the University of California, Davis. “If we’re operating at the ragged edge of novelty, there may only be one person that understands the code, and it may take a lot of time for another person to understand it. And even then, they may not be asking the right questions.”

Weisberg shared other helpful practices in a Twitter thread about …….

Source: https://www.nature.com/articles/d41586-022-00217-0
