The Disconnect in Digital Communications

There is a significant disconnect in digital communications between how people receive messages and how they generate them.

What do people value when they send messages?
• a minimum of time invested by them
• fast replies
• useful answers that require little time to interpret
• that action occurs as quickly as the communication does

What do people value when they receive messages?
• clarity as to what they need to do with the information
• value, not membership in the 60-80% of messages that get deleted as useless or as outright spam
• completeness, where all information they need to take action or make a decision is included in the message
• brevity, so that the message doesn’t waste their time; this is where the “TL;DR” comment came from

How can we balance this divide in digital communications?
• Only include decision makers in requests for action.
• Provide all information someone needs to take action, even if that is identifying the person who can provide more info if required.
• Set up tiered information distribution. In an urgent situation, send the message only to the first three people who can act, moving to the next ten candidates as time passes without a response (see the sketch after this list).
• Ensure that critical information is in commonly accessible locations instead of relying on messages to notify or inform people.
• Only send out status messages, meeting summaries and reports to those who care.
• Give people the option to opt out of automated notifications.
• Only send alerts to those who can act upon them or, as managers, need to know.
• Include instructions on how to escalate a problem or get help at the bottom of the message, such as how to contact the help desk if the password reset doesn’t work.
• Train people how to get information they need so they don’t have to send five emails to as many people to try to find something that should be in a common data repository.
• Ensure that your automated message generation doesn’t turn into spam, such as notifying someone on vacation that an action needs to be taken before sending escalating warnings to them and a supervisor.
• Where possible, allow people to digitally delegate tasks so that those who are unavailable don’t receive 20 notices about items their delegate is already handling.
• Create a central list of subject matter experts, so that no one has to ask, “Who can troubleshoot this?”
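
As a concrete illustration of the tiered-distribution idea above, here is a minimal Python sketch. All of the names (Recipient, notify, tiered_alert) and the tier sizes are illustrative assumptions, not references to any particular tool; the notify function stands in for whatever email, chat or paging integration you actually use.

    import time
    from dataclasses import dataclass

    @dataclass
    class Recipient:
        name: str
        can_act: bool          # can this person actually fix the problem?
        on_vacation: bool = False

    def notify(recipient, message):
        # Placeholder for a real email/chat/pager integration.
        print(f"Notifying {recipient.name}: {message}")

    def tiered_alert(candidates, message, tiers=(3, 10), wait_seconds=600,
                     acknowledged=lambda: False):
        """Alert a small first tier, widening only if no one acknowledges.

        Skips anyone who cannot act or is on vacation, so the alert never
        becomes spam for people who could not respond anyway.
        """
        eligible = [r for r in candidates if r.can_act and not r.on_vacation]
        sent = 0
        for size in tiers:
            for recipient in eligible[sent:sent + size]:
                notify(recipient, message)
            sent += size
            if sent >= len(eligible):
                break
            time.sleep(wait_seconds)   # give this tier time to respond
            if acknowledged():
                return True
        return acknowledged()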

The Unusual Exception to Pareto’s Law in IT, Updated

Pareto’s Law is based on an observation by the Italian economist Vilfredo Pareto, who found that 80% of wealth and property was held by 20% of the people; the same pattern has held true in many other areas. For example, roughly 80% of your user tickets will come from about 20% of root causes, such as single sign-on problems, a specific installation process or common issues like forgotten passwords. When determining areas for improvement, unless risk management or contractual requirements demand otherwise, industrial engineers should focus on that 20% of problems and their root causes to have the greatest impact on the organization.
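
To make the 80/20 cut concrete, here is a small Python sketch of a ticket root-cause analysis. The causes and counts are invented for illustration; in practice they would come from your help-desk export.

    from collections import Counter

    # Hypothetical root causes pulled from a help-desk export.
    tickets = (
        ["forgotten password"] * 450 + ["single sign-on failure"] * 210 +
        ["install error"] * 140 + ["printer offline"] * 90 +
        ["disk quota"] * 60 + ["other"] * 50
    )

    counts = Counter(tickets)
    total = sum(counts.values())
    running = 0
    print("Root causes covering ~80% of ticket volume:")
    for cause, n in counts.most_common():
        running += n
        print(f"  {cause}: {n} tickets ({n / total:.0%})")
        if running / total >= 0.80:
            break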

A 2014 study by the Nielsen Norman Group found an exception to Pareto’s Law on the internet in the form of user participation. About 90% of all visitors to a website such as a social networking site are lurkers, people who read but never post. Nine percent are users who contribute rarely, often only a single comment or review. Over a quarter of all content, from online comments to consumer reviews, comes from only one percent of the community. This is a far greater concentration of power and prominence online than the internet’s supposed democratizing effect would predict. Blogging sits somewhere in the middle: of the estimated 1.1 billion internet users, there are about 55 million blogs, so personally managed content comes from the equivalent of 5% of the user community.
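
A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

    users = 1_100_000_000   # estimated internet users at the time
    blogs = 55_000_000      # estimated blogs

    print(f"Lurkers (90%):           {users * 0.90:,.0f}")
    print(f"Rare contributors (9%):  {users * 0.09:,.0f}")
    print(f"Heavy contributors (1%): {users * 0.01:,.0f}")
    print(f"Bloggers as a share of users: {blogs / users:.0%}")   # -> 5%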

Why do you want more equal distribution of online content creation and feedback? Why does the unusual exception to Pareto’s Law in IT matter?

Collaboration is Only as Good as the Collaborators

Collaboration can be described as combining ideas from many different sources to make a whole that is better than the work of any single person. Various studies have found the wisdom of the crowd to be accurate at everything from unusual medical diagnoses to guessing the number of candies in a jar. However, low participation rates mean there are very few people in the problem-solving crowd, despite the large number of visitors to a site.
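
To see how thin the “crowd” really is, consider a toy simulation (every number here is invented). Even with thousands of visitors, only the contributing fraction affects the crowd estimate, and tiny crowds are noisier:

    import random
    import statistics

    random.seed(42)
    TRUE_COUNT = 500            # candies actually in the jar (invented)
    visitors = 10_000

    def guess():
        # Each contributor guesses the true count plus noise.
        return TRUE_COUNT * random.uniform(0.5, 1.5)

    for rate in (1.0, 0.01, 0.001):
        crowd = max(1, int(visitors * rate))
        estimate = statistics.median(guess() for _ in range(crowd))
        print(f"participation {rate:.1%}: crowd of {crowd:>5}, estimate {estimate:.0f}")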

The idea of collaboration is great until you realize the contributing team may be as small as the in-house talent you could have brought together for a brainstorming session. Many of the great sources of ideas that crowd-sourced collaboration was intended to tap do nothing but read the results and rarely respond. If the top 1% of contributors are not subject matter experts or experienced in the area, then the most valuable talent a crowd-sourced project hoped to find isn’t among the group actually participating.

Excessive Weight Given to the 1%

There are often complaints about how much wealth and influence the top 1% hold in society. Setting aside economics and politics, consider the impact online. Online marketing relies on reaching that top 1% to generate most of the word-of-mouth advertising we see on the web. Upsetting or annoying ninety users has little impact, but annoy that one one-percenter and they will broadcast to the world how lousy your product or service is. Conversely, the nine percent of rare contributors are glossed over, while the one-percenter has a disproportionate voice due to far greater involvement online. Politics seeps into everything, from Wikipedia articles edited by global warming advocates to remove all mention of the Medieval Warm Period, bolstering claims that it has never been hotter, to a one-person campaign promoting a particular website that outperforms some companies through sheer persistence and being the first to comment on every related article. In the end, the top one percent of contributors have a greater web presence and influence than the other 99%.

The Nielsen Norman Group 2014 report also describes how private arguments shut down entire discussion forums through sheer volume. Two one-percenters publicly arguing the merits of Unix versus Windows, or the interpretation of a technical standard, can crowd out the few other contributors on a site, or even deter infrequent contributors from risking a single comment and getting burned. Social justice warriors do this almost everywhere online, breaking up discussions of technical matters with social-issue debates. The end result is the same: all of the content comes from the top one to two percent. The creator may defend his or her point of view in the forum, or the creator’s greatest advocates argue with others over those points. One person’s insightful analysis or simple observation is lost because the two impassioned parties drown it out, and the intent of online collaboration and technical review is lost with it.

Search Results and Rankings

Even lurkers search the web, so search use itself isn’t where the top 1% of online content generators have their adverse impact. Their impact is in back-linking and link promotion, which affect search engine rankings for the sites involved. A small group of motivated individuals can have as great an impact as a professional marketing campaign. For more mundane projects, employees rating or voting on one another’s content affects the search results of queries on internal corporate intranets.

There is one exception to the Pareto pattern that Nielsen didn’t bring up: many networks are biased toward specific content creators. This may be a system update from the administrative group, a politically connected figure given a top spot on a social network, or featured content from paid providers. These people have the limelight because of the power or influence they already wield, and that shifts search results and rankings toward their sites in corporate intranet searches and on sites like Facebook and LinkedIn.

The result is that inane posts by someone with currently high status are deemed more important than deep, insightful content from an average user. Content from the top few percent thus stays in the top few percentage points because of its wider distribution and higher priority, suppressing the content of those who lack premium status.
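
A toy scoring function makes the effect visible. This is purely illustrative and not any real platform’s ranking algorithm; the quality signal and boost constant are invented:

    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str
        quality: float      # hypothetical relevance/quality signal, 0-10
        promoted: bool      # featured, admin or politically favored placement

    PROMOTION_BOOST = 5.0   # invented constant

    def score(post):
        return post.quality + (PROMOTION_BOOST if post.promoted else 0.0)

    posts = [
        Post("average_user", quality=8.0, promoted=False),
        Post("status_account", quality=4.0, promoted=True),
    ]
    for p in sorted(posts, key=score, reverse=True):
        print(f"{p.author}: score {score(p):.1f}")
    # The promoted post ranks first despite the lower quality signal.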

What’s Your Comfort Level?

When scheduling workloads and projects, there are several reasons to ask your team what tasks and projects they are comfortable taking on.

• Managers may plan a workload based on prior experience, biased toward the historic average, while team members are looking at the complexity and problems this particular project will have. They know their own learning curve will be slower because the new project isn’t exactly like the old one.
• Many managers plan around a “happy path” where all testing goes well, nothing has to be reworked, no new problems are discovered and no one overlooked a critical interface to be tested. Where managers plan a ten-day project assuming only one day of slippage, those in the trenches are more likely to say they need two extra days, thirteen in total, to make sure it is done right. Ask your team what schedule they are comfortable with instead of setting one that suits your calendar and arbitrary deadlines (see the sketch after this list).
• Managers may not realize how much technical debt staff must pay down in addition to the new tasks the manager wants to assign. That technical debt is an extra burden that leaves less time available for new projects.
• Managers may not realize the tasking inefficiencies that come from communication delays, training the people you delegate to, and reliance on subject matter experts who still need to be onboarded.
• Managers tend to expect overtime to meet desired completion dates, regardless of the risk of burning out team members or of their prior commitments.
• Managers who over-plan, to the point that people log time cards to tenths of an hour, hand-type long daily reports and perform other time-intensive rituals, end up taking time away from the actual work to report on the status of that work. I’ve seen this in status meetings where senior engineers remarked, “I can get ready for the status meeting or finish the task you want done by the end of the day. Pick one.” Managers need to ask at what point oversight becomes a source of inefficiency that hinders the actual work, though most aren’t comfortable doing so.
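
One lightweight way to fold the team’s comfort level into a schedule is a three-point (PERT-style) estimate, where the team, not the calendar, supplies the pessimistic number. The day counts below are the invented ones from the example above:

    def pert_estimate(optimistic, most_likely, pessimistic):
        """Classic three-point estimate, weighted toward the likely case."""
        return (optimistic + 4 * most_likely + pessimistic) / 6

    # Manager's happy path: 10 days. Manager's slippage allowance: 1 day.
    # Team's comfort level: 13 days to make sure it is done right.
    estimate = pert_estimate(optimistic=10, most_likely=11, pessimistic=13)
    print(f"Schedule estimate: {estimate:.1f} days")   # ~11.2 days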

Software Verification and Software Validation – What’s the Difference?

What is Software Validation?

Validation can be described as confirming that you are building what the customer wants. Software validation is the process of ensuring that your application will meet functional and non-functional requirements; it begins before coding and continues during development. In general terms, validation is verifying that the customer wants a bike before you put together a car; in software, it is verifying that the customer wants accounting software before building a data management tool. In short, validation is finding out what the customer wants before you start writing code, and it takes place primarily during the system requirements analysis and design phases.

What is Software Verification?

Verification is verifying that the product is being assembled correctly, after you have determined what you are supposed to build. If you are building a bike, verification is checking that you have all of the parts in the correct sizes, from the right-length frame to a correctly sized seat. Software verification of an accounting tool would involve ensuring that the accounting register and check generation work per customer requirements before considering the coding complete. Software verification takes place during the implementation, integration and testing phases, and it should be completed before software release.

Here’s the Difference

Software validation is determining whether or not an interface with a website or another software application is necessary at all. Software verification is ensuring that the interface works appropriately after it is “built”. When software requirements creep or expand beyond their original scope, as when a new interface requirement is added, the newly built interface must be verified to ensure that information flows between the two applications without errors.
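
As a sketch of the distinction in code: validation was the decision that the interface belongs in scope at all; verification is a test like the one below, confirming the built interface moves data without errors. The function and field names are placeholders, not a real API:

    import unittest

    def export_transactions(records):
        # Placeholder for the real interface: serialize records for
        # the downstream application.
        return [{"id": r["id"], "amount": round(r["amount"], 2)} for r in records]

    class TestAccountingInterface(unittest.TestCase):
        """Verification: the built interface behaves per the requirement."""

        def test_every_record_transfers_without_error(self):
            records = [{"id": 1, "amount": 19.999}, {"id": 2, "amount": 5.0}]
            exported = export_transactions(records)
            self.assertEqual(len(exported), len(records))
            self.assertEqual(exported[0]["amount"], 20.0)   # rounded to cents

    if __name__ == "__main__":
        unittest.main()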

Common Causes of Confusion

Scope creep of a project, such as the addition of new requirements after the scope was decided, essentially moves the software validation phase into the software verification phase. Adding new software requirements adds cost and time to the software development process.

How to Avoid the Confusion
1. Include all stakeholders in the requirements definition phase.
2. Hold a strict line on the requirements definition phase. Refuse to let new requirements creep in once coding begins, even if someone promises that it is a minor change.
3. Determine where the data must come from and go as part of typical use of the tool. If a data interface will be used frequently, make the ability to transfer data between those applications or databases a requirement, and include testing of those transfers in software verification.
4. Focus testing efforts on core functions over optional enhancements when crunched for time. Make sure the critical functions work before turning to new features you can decide to drop if there isn’t enough time to test them.
5. Recognize that a functioning application is a requirement, not an enhancement. Prioritize testing of functional requirements over cosmetic ones; a report with awkward-to-read headers is an annoyance, but data imports and accurate reports are essential (see the sketch after this list).
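
One way to make that priority explicit is to tag tests by criticality and run the critical tier first. The sketch below uses pytest markers; the marker names and the report function are invented for illustration (register the markers in pytest.ini to avoid warnings):

    # test_priorities.py
    # Run the essentials first:   pytest -m critical test_priorities.py
    # Run cosmetic checks later:  pytest -m cosmetic test_priorities.py
    import pytest

    def monthly_report(rows):
        # Placeholder for the real report generator.
        return {"total": sum(rows), "header": "MONTHLY REPORT"}

    @pytest.mark.critical
    def test_report_total_is_accurate():
        assert monthly_report([10, 20, 30])["total"] == 60

    @pytest.mark.cosmetic
    def test_header_spacing_is_clean():
        # An awkward header is an annoyance, not release-blocking.
        assert "  " not in monthly_report([1])["header"]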