A number of bugs in recent Blackboard updates seem to be the kind that should have been caught during development and testing. For example, the current bug where clicking a user's attempts does not display the correct student, and the recent error where creating or editing a tool link deleted its description, would not necessarily generate error log entries or be recognized by the system as problems, yet a human user would identify them immediately.
This suggests that Blackboard's testing regimen lacks a sufficient real-world element, in which human users run common processes and activities to determine where problems occur.
Admittedly, it is not easy to spot every problem that may occur. Indeed, in both of the examples above, the errors were brought to our attention by faculty users attempting to use those tools. On the other hand, most of us SaaS customers in the academic support business have neither the time nor the personnel to engage in this kind of detailed testing for every update. In addition, the only window in which we can spot these errors is the 10-14 days the updates spend in test systems. Even if we spot a bug in that environment and report it, it is highly unlikely Blackboard could complete a patch in time to correct the issue before it reaches production systems.
So, Blackboard needs to add testing that involves human users running processes and using tools in a systematic way to identify errors. If this is already being done, the process could clearly stand to be improved; based on our communications with Support, these were not issues of which Blackboard was already aware. Improving the process by which such errors can be identified and fixed before updates ship would be an important improvement to Blackboard's process.
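To illustrate the kind of user-level check that would catch a bug like the tool link one: the assertion has to be on what the user would see after the workflow, not on whether the system logged an error. The sketch below is purely illustrative; the `ToolLink` class and its `edit` method are hypothetical stand-ins, not Blackboard's actual API.

```python
# Hypothetical stand-in for a course tool link. Blackboard's real
# objects and APIs differ; this only models the workflow being tested.
class ToolLink:
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def edit(self, name=None, description=None):
        # A correct edit leaves omitted fields untouched. The recent bug
        # behaved as if an omitted description were cleared instead.
        if name is not None:
            self.name = name
        if description is not None:
            self.description = description


def test_edit_preserves_description():
    # Walk the same steps a faculty user would: create a link with a
    # description, rename it, then check the description survived.
    link = ToolLink("Discussion Board", "Weekly discussion prompts")
    link.edit(name="Discussions")
    # No error log entry would flag a cleared description; this
    # user-level assertion does.
    assert link.description == "Weekly discussion prompts"
    assert link.name == "Discussions"


test_edit_preserves_description()
```

A test like this fails the moment the description disappears, regardless of whether the system considers the operation a success, which is exactly the gap between log-based monitoring and human-style verification described above.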