
The top findings from the Patterns in Faculty Learning Management System Use research study at the University of Illinois at Chicago (UIC).

 

1. Online courses at UIC focus on holistic use of LMS tools.  68.3% of hybrid/elearning courses fell into this latent class and used five key tools: content items, grade center, announcements, discussions, and digital assessments, compared with only 22.5% of in-person courses.  Only 3% of hybrid/elearning courses were in the content repository group.

 

2. 53.7% of in-person courses at UIC were in the complementary usage group, meaning they used three main tools: grade center, announcements, and assignments.  In-person courses otherwise split 22.5% holistic and 23.7% content repository.  Content repository courses used content items and announcements, without the grade center, assignments, or digital assessments.

 

3. Courses in the holistic group had larger class sizes and a greater likelihood of online delivery.

 

4. There is a clear gap between how students spend their time in the digital content of courses and what faculty intended in their designs.  Perhaps the time students spend on course items reflects their best judgment about what will make them successful in the course.  Faculty may be designing opportunities for students that are not well communicated or utilized.  Further research is needed to bridge this gap and match student digital behavior with faculty expectations and their design for learning.

 

5. The aggregate profiles of courses by school or college often reflect the general nature of the programs and their curricular approach.  The adoption of specific tools in the digital portion of a course should not be correlated with the academic quality of the program or the effectiveness of instruction; this approach reports only on the selection of tools in the Blackboard Learn portion of the course design.  However, this presentation of the results may suggest resources needed by specific colleges, such as assigned instructional designers or instructor training sessions in specific Blackboard tool use.  Finally, as the body of knowledge about the Scholarship of Teaching and Learning (SoTL) continues to grow, exploring tool use in the local learning management system may help to distribute SoTL findings to instructors and colleges according to the digital evidence of their course design (Englund, Olofsson, & Price, 2017; Openo et al., 2017).

 

6. The findings in this study need to be related to the patterns identified previously by measuring student use of courses across a large data set (Whitmer et al., 2016). That study did not report on faculty intent, in terms of the design of the course, but on the time students spent consuming content. The present study adds that while students may be spending a large amount of time in the content of the course, at least 28% of courses at UIC were created with well-rounded opportunities for students to engage in assessments, discussions, announcements, assignments, and reviewing grade postings.  This affects approximately 31,417 student course enrollments (706 courses with 44.5 enrollments on average, out of a total of 2,562 courses with an average of 38.4 student course enrollments). The definition of the Complementary group in the study by Whitmer et al. (2016) included content with announcements and the use of the gradebook.  Our study identifies a different course design profile as Complementary tool use: it includes digital assignments for roughly half of the courses.  Perhaps the time required to complete the assignments cannot be recorded in the student activity data; however, there is a clear intention on the side of faculty for students to submit their assignments through the LMS.  Together with the Holistic tool use profile, these courses make up 79% of courses at UIC.
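For anyone who wants to retrace the enrollment arithmetic quoted above, here is a quick sketch using only the figures reported in the study:

```python
# Quick retrace of the enrollment arithmetic quoted above (all values come from the study).
holistic_courses = 706          # courses in the Holistic latent class
avg_holistic_enrollment = 44.5  # average student enrollments per Holistic course
total_courses = 2562            # all UIC courses in the data set

print(round(holistic_courses * avg_holistic_enrollment))   # ~31,417 student course enrollments
print(round(holistic_courses / total_courses * 100, 1))    # ~27.6%, the "at least 28%" of courses
```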

 

7. The use of the Blackboard Learn system as a “content repository” makes up only 21% of courses in the system, and only 3% of hybrid/elearning courses.  This latent class profile may represent the initial phases of a faculty member digitizing a course experience, or a view of the role of technology in teaching as faculty-to-student and content-to-student communication.  Certainly, this intent by faculty does not tap into student-to-student communication, digital collaboration between faculty and students (assignments) or among students (groups, discussions), or digital assessment (quizzes or exams) in the system.  It may be that these teaching and learning dimensions are facilitated in the classroom or in other systems.

 



 

 

More:

Patterns in Faculty Learning Management System Use

ResearchGate: Patterns in Faculty Learning Management System Use

Springer Nature Reader

 

References:

Machajewski, S., Steffen, A., Romero Fuerte, E., & Rivera, E. (2018). Patterns in Faculty Learning Management System Use. TechTrends. http://doi.org/10.1007/s11528-018-0327-0

Last week we had an email avalanche when a teacher sent an email via a Personalized Learning Designer (PLD) rule.
The idea was to send a personalized email to every student.
The rule was simple: send an email at a specific date and time.
The email contained the token ((student_firstname)) and was sent to all users in the course with the role student.

This apparently created 400 emails, one for every student, and each of those was then sent to every student, making 160,000 emails. The expected behaviour was: send one email to each student.

We now realize that he should probably have sent the email to 'Triggering user', but it's not immediately clear that including the token actually acts as a trigger. Or does it? Maybe you can clarify this for us.

It would be nice to have a preview feature for PLD, maybe even a warning when a user tries to do something that would trigger so many emails: "You will be sending 160,000 emails! Do you really want to do that?"

(this post is a continuation update from my prior post in July 2018)

 

10/16 Edit: updated delivered CU dates and links

9/28 Edit: updated expected CU delivery dates

9/14 Edit: updated Excel file annotations info

 

It's been ~six weeks since my last update and there have been some new developments on multiple fronts that I thought I'd share:

 

Excel-based File Annotations

Box has informed Blackboard that today, September 13th, they will be enabling annotation capabilities for Excel-based file formats (.xls, .xlsx, etc.).  Once available, this functionality will represent another improvement to New Box View and another gap closed against the former Crocodoc solution.

 

The exact time this functionality will be available is not known, but I will keep checking and update this post once confirmed!

Excel files can now be annotated!  Go try it yourself!

 


 

Download with Annotations in Learn Original Bug Fix Availability

Since the release of the Download with Annotations capability, a bug was identified that prevents the download of annotated documents when using the 'My Grades' workflow.  We've identified the underlying issue and have a fix ready, but the fix must be applied to Learn instances (not within a microservice).  The next round of planned Cumulative Updates for Learn will provide this fix; those CUs are currently targeted for the following dates:

 

 

These dates are still targets and could move, but this is our expected timing based on what we know right now.  I'll update this post as the CUs become available.

 

In the interim, the workaround for this issue is to use the Download with Annotations capability by accessing the attempt through the Content -> Assignment -> Attempt workflow.

 

Accessibility Update

Box has made internal progress on improving the accessibility of New Box View: the main content and navigation UI has been accessible for quite a while, and annotations, the next priority, have also seen progress.  However, the improvements will not be available until the general annotations features make their way into the main Box content management application, and I don't yet have a timeframe from Box on that.  That said, given the development cadence and agile methodology that Box follows, movement and communication from Box means we're getting closer!  As soon as I have more information, I'll share it via an update to this post or a new blog post.

 

I've also updated the issue tracking spreadsheet this month.

I was going to start a discussion and solicit feedback, then I realized I could just as easily create a blog post and have the discussion in the comments section.   So here we are.

 

What's this about?

 

Part of staying on top of my work is keeping up with Known Issues in Blackboard.  A lot of times, I don't have time to log in to BtBb and look things up, so what I'll do is keep a copy of the Known Issues spreadsheet on my workstation for quick reference. This blog post explains the hows and the whys, and also goes into what I do with the data once I've got it.

 

Getting the Data from BtBb

 

Log in to Behind the Blackboard and click on the link to Known Issues.

Once the Known Issues page is loaded, you'll want to narrow down the results a bit.  To do this, click the "Release" dropdown and select your environment.

 

NOTE:  There are only two options available, 9.1 and SaaS. If you don't know which of these is applicable to your institution, you probably shouldn't be logging into BtBb in the first place.

 

So as the image to the right will attest, I've selected my release (SaaS), and my Product (Learn) and Article Type (Known Issues) were selected by default when I clicked on the "Known Issues" link from the BtBb main page.

 

 

So once I've selected my Release, I click on "Export to Excel" to download the data.  (The "Export to Excel" icon is in the upper region of the page, on the right-hand side, hidden in plain sight, as it were.)

 

Your browser will download the file, and depending on your browser and system configuration, you may or may not be prompted to do something. My machine is so accustomed to me doing this that it doesn't even bother me when I go to download pretty much anything from Bb.

 

We've got data, 'cause we've got a band.1

 

Now that the XLS file has been downloaded to your workstation, you're ready to get to work and do all sorts of magical things.   So locate that file on your machine and open it in Excel.

 

When you try to open the file, Excel will return an error indicating that the file format and extension don't match and ask if you want to open it anyway.  Click "Yes" to ignore the error and open the file.

 

 

When you open the file (against Excel's better judgement), something weird happens.  There's data that resembles a spreadsheet, but what in tarnation is all that gobbledegook in the very first cell?

 

That, my friends, is something they didn't teach you at Monument University.

When you download the .XLS file, there's a surprise hidden inside.

 

The spreadsheet from BtBb is NOT an XLS file at all, but rather an HTML document with a .XLS extension.  Excel can open it and present the data because it’s HTML and it’s laid out in tables, so Excel says “oh.  Ok.  I can figure this out.” and it goes from there.  But because it’s not “real” Excel data, it’s hard to work with (and that’s also why it’s always got that big ugly line of code in the top row).

 

To see for yourself, download the file from BtBb, change the extension to .HTM (or .HTML), then open the renamed file in your favorite text editor or browser.   It will look okay, but it’s still not quite right.  If you copy and paste that text into Excel, you’ll have a lot of blank rows and your sorts will be wonky as a result.
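If you'd rather check from a script than by renaming the file, a quick peek at the first few hundred characters tells the same story (a minimal sketch; "known_issues.xls" is just a placeholder for whatever name your browser gave the download):

```python
# A minimal sketch to confirm the "XLS" export is really HTML.
with open("known_issues.xls", "r", encoding="utf-8", errors="ignore") as f:
    print(f.read(500))   # expect <script> and <table> markup, not binary Excel data
```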

 

The next section shows how to fix that.

 

Making Sense of the Weird Exported Data

 

Open the HTML document in a text editor (There are tons of them out there.  I'm a fan of Atom because it's cross-platform.  But I confess, if I'm on a PC, I'm partial to Notepad++).

 

PART ONE - CLEAR YOUR HEAD

 

Once you've opened the document, delete everything before the <table> tag.  There's a lot.  Most of it is in the <script>, which is actually the only thing you need to get rid of, but it's simpler just to get rid of everything rather than look for the start and end of the <script>, especially if you're not used to working with HTML code in a text editor.


 

Now we are left with a document with no HTML heading information.  This would be a terrible practice for actual HTML coding, but we're not doing actual HTML coding for publication, we're just cleaning up a mess.  Your browser will still read it without issue (browsers are smart like that).
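If you'd rather not do this step by hand, it can also be scripted.  Here's a minimal sketch, assuming Python is available; the file names are just placeholders for wherever you saved the export:

```python
# Minimal sketch of Part One: drop everything before the first <table> tag.
# File names are placeholders for wherever you saved the BtBb export.
with open("known_issues.xls", "r", encoding="utf-8", errors="ignore") as f:
    html = f.read()

table_start = html.lower().find("<table")   # case-insensitive hunt for the first table
cleaned = html[table_start:] if table_start != -1 else html

with open("known_issues_clean.html", "w", encoding="utf-8") as f:
    f.write(cleaned)
```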

 

PART TWO - ELIMINATE THE EMPTY PARAGRAPHS

If you saved the file at this point and opened it with your browser (or with Excel), you'd see that the garbage at the top was gone.   But we want to do something else before we save it.   What you can't see in the browser is that there are several <p> elements in the code that would cause Excel to render the data as a blank row.  This isn't cool.   So you want to eradicate all of these. 

 

So...  do a find/replace for <p>, replacing each instance of <p> with a delimiter of your choice (I like ; in this instance, but that’s just me). 


Save the file.
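Scripted, Part Two is just one more replace on top of the Part One sketch above (again, the file name is a placeholder and the semicolon is simply my delimiter of choice):

```python
# Minimal sketch of Part Two: swap the stray <p> tags for a delimiter.
with open("known_issues_clean.html", "r", encoding="utf-8") as f:
    html = f.read()

html = html.replace("<p>", ";")   # the same find/replace you'd do in the text editor

with open("known_issues_clean.html", "w", encoding="utf-8") as f:
    f.write(html)
```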

 

PART THREE - GETTING USABLE DATA INTO EXCEL

 

We're not quite done.  Now that you've saved the HTML file, open it in your web browser of choice.  Magically, all of the extraneous rows are now gone.  Now, copy all of that data in the table (the one in your browser) into a blank Excel document.  Depending on your computer, it may take a bit, as there is a lot of data there.

 

However, if you check out the Excel spreadsheet, you'll notice that the data is clean and much easier to work with.

 

So save the Excel file, and you now have a copy of the Known Issues saved locally.  YAY!!

 

Wrapping it up

 

This isn't the end, but it gives you something to work with.  The real tricky part comes when you want to make those dates something you can actually use; at the moment, Excel treats that data as generic text because it doesn't know any better.

 

If there's interest, I'll write a follow-up piece to this about how to clean up the date data so that it actually means something.

 

I will confess, this is a clunky, labor-intensive method, and I’ll come up with a more elegant method eventually, but for the moment, it gets the job done.
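For what it's worth, one candidate for that more elegant method is to let a script read the HTML tables directly and write out a real workbook.  Here's a minimal sketch, assuming Python with pandas (plus an HTML parser such as lxml or html5lib, and openpyxl for the .xlsx output); the file names are placeholders:

```python
# Minimal sketch: parse the HTML-disguised-as-XLS export and save a real .xlsx.
from io import StringIO
import pandas as pd

with open("known_issues.xls", "r", encoding="utf-8", errors="ignore") as f:
    tables = pd.read_html(StringIO(f.read()))   # one DataFrame per <table> in the document

known_issues = tables[0]                        # assuming the export holds a single data table
known_issues.to_excel("known_issues.xlsx", index=False)
print(known_issues.shape)                       # quick sanity check on rows and columns
```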

 

 

 

 

 

 

 


 

1Okay, it should be noted that while I'm writing this, I'm watching a live Phish concert (on the web, not in person), hence the oddball headers and occasionally goofy tone of the article.