In the summer we will be moving from managed hosting with ASR to SaaS and the DDA.
Can anyone offer me any guidance on using DDA - what client are you using to run queries, any help guides etc?
We are also looking at automating some queries, has anyone else done this with DDA?
All the solutions I've found so far require some write privileges which I obviously don't have!
I am (was) an OpenDB user and started using DDA when we moved to SaaS. I am still trying to figure out the best client configuration to use.
With OpenDB, I was using a combination of Oracle SQL Developer, SQLPlus, and Tableau to access the tables.
At the moment, I can issue queries using Oracle SQL Developer, pgAdmin, Talend, and Tableau; however, I have encountered several challenges:
- Due to the way that DDA is implemented with PostgreSQL, queries which used to work in my original environment typically time out, and temporary table space isn't supported by the architecture.
- PostgreSQL syntax is different, particularly around date datatypes.
To get around the performance issues, I've been trying to create local replicas of the tables, but so far I haven't been successful at moving the data I want from the DDA tables into local copies.
Fortunately, I was able to get Tableau to work - but I am still working to create an activity_accumulator file that covers the duration of a term.
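In case it helps anyone with the same extraction problem, here is a sketch of the pattern I'd try: pull activity_accumulator in day-sized windows so no single query runs long enough to time out, using a psycopg2 server-side cursor. This is untested against DDA specifically - the DSN, date range, and timestamp column name are placeholders you'd adapt to your instance.

```python
from datetime import date, timedelta

def day_windows(start, end):
    """Yield (window_start, window_end) pairs covering [start, end), one day each."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=1), end)
        yield cur, nxt
        cur = nxt

# Hypothetical extraction loop (requires psycopg2 and your DDA credentials):
# import csv, psycopg2
# conn = psycopg2.connect("host=... dbname=..._stats user=... password=...")
# with open("activity_accumulator.csv", "w", newline="") as f:
#     writer = csv.writer(f)
#     for lo, hi in day_windows(date(2018, 1, 2), date(2018, 5, 12)):
#         # A named cursor is server-side, so rows stream instead of
#         # materializing the whole result at once.
#         with conn.cursor(name="aa_cur") as cur:
#             cur.execute(
#                 "SELECT * FROM activity_accumulator "
#                 "WHERE timestamp >= %s AND timestamp < %s", (lo, hi))
#             for row in cur:
#                 writer.writerow(row)
```

Each window is a short, cheap query, so a timeout only costs you one day's slice rather than the whole term.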
- DDA provides access to a delayed copy of the live tables. However, again, many queries against these tables fail if they take too long and/or bring back too much information.
- DDA includes a set of stats tables which are snapshotted shortly after midnight. These are a little more forgiving, but I still encounter query failures.
- SaaS tables are slightly different. For example, the activity_accumulator messages field was moved.
- If using SQL Developer, you need to manually edit the connections.xml file (C:\Users\sor\AppData\Roaming\SQL Developer\system184.108.40.206.21\o.jdeveloper.db.connection.220.127.116.11.42.151001.541) to specify the database name. I am still trying to figure out whether I can define a database link.
- PostgreSQL database names are case sensitive.
- SQL syntax is different from Oracle. One example: PostgreSQL's current_date vs. Oracle's sysdate.
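For the handful of purely mechanical differences, I've found it useful to do a first regex pass over old Oracle queries before fixing the rest by hand. A sketch - the rewrite list below is illustrative, not a general translator, and won't touch things like DECODE or outer-join (+) syntax:

```python
import re

# A few one-for-one Oracle -> PostgreSQL rewrites (illustrative only).
ORACLE_TO_PG = [
    (re.compile(r"\bsysdate\b", re.IGNORECASE), "current_date"),
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "COALESCE("),
]

def sketch_translate(sql):
    """Apply the naive rewrites above; anything else still needs hand conversion."""
    for pat, repl in ORACLE_TO_PG:
        sql = pat.sub(repl, sql)
    return sql
```

Anything the pass doesn't recognize is left alone, so you can diff the output against the original to see what still needs attention.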
I would be very happy to collaborate.
Hi Roger. Would you be willing to post some examples of OpenDB queries and their DDA counterparts? Just looking for examples of the modifications you've had to make.
Washtenaw Community College moved to Bb SaaS Jan 02 2018 with DDA access...
I primarily use the free PostgreSQL client pgAdmin (version 4.3) from my Win10 desktop for ad-hoc/one-off queries.
From Linux I use the psql command-line client to execute calls against the DDA from some scripts.
I have also created several scripts in Python that use the "psycopg2" library/module.
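For anyone starting with psycopg2, a minimal pattern for the kind of scripted one-off query described above - connection parameters are placeholders for your DDA instance, and only the CSV helper is shown end to end:

```python
import csv
import io

def rows_to_csv(header, rows):
    """Serialize a query result (header + row tuples) to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical usage against a DDA database (requires psycopg2):
# import psycopg2
# with psycopg2.connect(host="...", dbname="BB..._stats",
#                       user="...", password="...", sslmode="require") as conn:
#     with conn.cursor() as cur:
#         cur.execute("SELECT course_id, row_status FROM course_main LIMIT 10")
#         header = [d[0] for d in cur.description]
#         print(rows_to_csv(header, cur.fetchall()))
```

Since DDA is read-only, everything here is plain SELECTs - no commit handling needed.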
My biggest adjustment was that when we were self-hosted with an Oracle database, I could access any of the various DB schemas from a single SQL query,
but in our postgres DDA we instead see five separate databases:
BB5a333e152baa4, BB5a333e152baa4_admin, BB5a333e152baa4_cms, BB5a333e152baa4_cms_doc, BB5a333e152baa4_stats
And I can no longer seem to create a query that crosses/includes data from two separate Postgres databases, in the way that I could fetch data from two different schemas in a single Oracle query.
Re: cross-link databases: Check if dblink is installed as an extension on your DDA instance. It is installed on my base database, _admin, and _stats but not _cms or _cms_doc (I have a case open about this). This means I can currently link out from base, _admin, and _stats.
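Because the connection string and the mandatory column-definition AS clause get repetitive, I sometimes generate the dblink call from a small helper. A sketch - database name and credentials are placeholders, and it doesn't handle quoting inside the connection string itself:

```python
def dblink_select(dbname, user, password, remote_sql, columns):
    """Build a SELECT over dblink() against another database.

    columns: list of (name, pg_type) pairs describing the remote result,
    which dblink requires in its AS clause.
    """
    conninfo = f"dbname={dbname} user={user} password={password}"
    coldefs = ", ".join(f"{name} {typ}" for name, typ in columns)
    # dblink takes the remote query as a string literal, so double any
    # single quotes inside it.
    quoted = remote_sql.replace("'", "''")
    return (f"SELECT * FROM dblink('{conninfo}', '{quoted}') "
            f"AS t({coldefs})")
```

Mistyping the AS clause is the most common dblink error I hit, so generating it from one list of (name, type) pairs keeps it consistent.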
Here's an example: this query pulls a list of course banners that are not at the institution level of the Content Collection and not in the course/org's home directory (e.g. cross-listed banner images due to a Known Issue with course copying). Note the dblink function being joined in to obtain Xythos data from _cms_doc:
SELECT COURSE_MAIN.COURSE_ID, COURSE_MAIN.PK1 AS COURSE_PK1, 'Banner' AS CNTHNDLR_HANDLE, COURSE_MAIN.BANNER_URL,
CASE WHEN SUBSTR(COURSE_MAIN.BANNER_URL,1,1) = '/' THEN REPLACE(REPLACE(COURSE_MAIN.BANNER_URL,'/xid-',''), '_1','') ELSE '0' END AS XID
FROM COURSE_MAIN
LEFT OUTER JOIN (
SELECT * FROM dblink('dbname=[your_db_name]_cms_doc user=[your_username] password=[your_password]', 'SELECT XYF_FILES.ENTRY_ID, XYF_FILES.FILE_ID, XYF_FILES.LAST_UPDATE_DATE, XYF_URLS.FULL_PATH FROM XYF_FILES LEFT OUTER JOIN XYF_URLS ON XYF_FILES.FILE_ID = XYF_URLS.FILE_ID')
AS XYF_FILES(ENTRY_ID int, FILE_ID int, LAST_UPDATE_DATE timestamp, FULL_PATH text)
) AS XYF_FILES ON COURSE_MAIN.BANNER_URL = '/xid-'||CAST(XYF_FILES.ENTRY_ID AS text)||'_1'
WHERE BANNER_URL IS NOT NULL
AND CASE WHEN POSITION('courses/' in XYF_FILES.FULL_PATH) > 0 THEN SPLIT_PART(XYF_FILES.FULL_PATH, '/', 3) ELSE SPLIT_PART(XYF_FILES.FULL_PATH, '/', 2) END <> 'institution'
AND CASE WHEN POSITION('courses/' in XYF_FILES.FULL_PATH) > 0 THEN SPLIT_PART(XYF_FILES.FULL_PATH, '/', 3) ELSE SPLIT_PART(XYF_FILES.FULL_PATH, '/', 2) END <> COURSE_MAIN.COURSE_ID
AND CASE WHEN POSITION('orgs/' in XYF_FILES.FULL_PATH) > 0 THEN SPLIT_PART(XYF_FILES.FULL_PATH, '/', 3) ELSE SPLIT_PART(XYF_FILES.FULL_PATH, '/', 2) END <> COURSE_MAIN.COURSE_ID
ORDER BY FULL_PATH
It took me way too long to pick up on this extension, but it has proven to be very helpful when identifying cross-linked content. It does slow the query down compared to the old simple join, but even this query only takes about 3 minutes to run on our installation and we have more than 36,000 courses/orgs in the system.
I hope this helps!
This is a partial update:
1) So far, I've been able to use Tableau to produce the high-level reporting I had been running before. I report on the number of daily and weekly active courses, students, and faculty. I have a batch Tableau process run a query against the stats tables and push the data up to Tableau Online.
I encountered problems when I wanted to join this data with other Oracle data. I was never able to get Oracle to read the PostgreSQL tables, or to easily get a local PostgreSQL server to read DDA's tables. I tried to use Talend Community to copy the DDA stats tables into a local flat file, but kept running out of heap space. So I recently started running Python scripts to move data around, mostly from PostgreSQL to Oracle.
- The DDA activity_accumulator table is slightly different from the OpenDB Oracle table. I had to work out how to translate PostgreSQL data through Python into Oracle. Most of my confusion was dealing with Python 'None' and date functions.
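The None/date handling that tripped me up can be sketched as a small normalization step between the PostgreSQL fetch and the Oracle insert. This is illustrative - the exact rules depend on your target column types, and the 'None'-string case only arises if the data took a detour through CSV:

```python
from datetime import datetime, timezone

def normalize_for_oracle(row):
    """Prepare one psycopg2 result row for an Oracle executemany() insert.

    - Python None already maps to SQL NULL in both drivers, but the
      literal string 'None' (from a prior CSV hop) must be turned back
      into a real None.
    - Timezone-aware timestamps from PostgreSQL are converted to naive
      UTC, since a plain Oracle DATE column carries no zone.
    """
    out = []
    for v in row:
        if v == "None":
            v = None
        elif isinstance(v, datetime) and v.tzinfo is not None:
            v = v.astimezone(timezone.utc).replace(tzinfo=None)
        out.append(v)
    return out
```

Running every row through one function like this keeps the Oracle side free of driver errors about unexpected types.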
I still might try a local PostgreSQL instance instead of a local Oracle DB. Oracle's advantage is that our admin extract tables are already in Oracle, so it is one less piece of data I have to move around, since Oracle supports database links.
We use WhereScape RED for our data warehouse automation. I was able to create a System DSN to point to our DDA instance and then use WhereScape to query that and automate the nightly data loads. I realize that's probably not very helpful if you don't use WhereScape but if anyone does and would like tips I would be happy to help.