Why do trustable assessment results matter, and how can an assessment management system help organizations improve the quality of their assessments? This webinar provides tips and introduces key processes and technologies that can be used to develop valid, reliable assessments that deliver trustable results.
This webinar demonstrates the "next generation" authoring tool available in Questionmark OnDemand. The session covers the basics of authoring and managing items, as well as organizing items into assessments.
Did they learn it? Can they do it? Can you prove it?
By answering these questions, assessments play a key role in mitigating risk and demonstrating regulatory compliance.
This webinar examines real-world examples of how assessments are used to strengthen compliance programs and provides tips for developing valid, reliable assessments that deliver defensible, trustworthy results.
Globalization and innovative web technologies have removed geographic boundaries and opened new opportunities for awarding bodies, technology certification programs and credentialing organizations. However, with these opportunities come new challenges: How do you localize items and assessments so they will reliably measure knowledge, skills and abilities across multiple languages and cultures?
Presented by Steve Dept, CEO of cApStAn, a leading provider of linguistic quality assurance services, and John Kleeman, Executive Director and Founder of Questionmark, this webinar will help you understand good practice so that your tests measure the same thing in other languages and cultures. We will explore both strategic and procedural issues that must be considered in undertaking an assessment localization project and offer some tips and strategies for anticipating and overcoming common pitfalls. Finally, the session will provide a brief overview of some of the technologies and services available to manage the translation of test items and assessments.
James R. Parry, Test Development Manager, US Coast Guard Training Center Yorktown, VA, explains how to use the Angoff method for setting cut scores, plus a simple spreadsheet and metatags, to ensure all participants receive an assessment of equal difficulty.
Video of a webinar presented by a Questionmark customer describing how to determine a cut score using the Angoff method.
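The Angoff approach described above can be sketched in a short script: each subject-matter expert estimates the probability that a minimally competent candidate will answer each item correctly, and the cut score is the sum of the per-item averages. The ratings below are invented illustration values, not data from the webinar:

```python
# Angoff method sketch: each SME rates the probability (0-1) that a
# minimally competent candidate answers the item correctly.
# These ratings are made-up illustration values.
sme_ratings = {
    "item_1": [0.60, 0.70, 0.65],
    "item_2": [0.80, 0.85, 0.90],
    "item_3": [0.40, 0.50, 0.45],
}

# Average the SME ratings for each item.
item_means = {item: sum(r) / len(r) for item, r in sme_ratings.items()}

# The raw cut score is the sum of the item averages; dividing by the
# number of items converts it to a percentage cut score.
cut_score = sum(item_means.values())
cut_percent = 100 * cut_score / len(item_means)

print(f"Cut score: {cut_score:.2f} of {len(item_means)} ({cut_percent:.0f}%)")
```

In practice the same arithmetic is easy to keep in the kind of simple spreadsheet the webinar mentions, with one row per item and one column per rater.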
In this video we’ll show you just how easy it is to access Questionmark OnDemand from SuccessFactors with SAML-based single sign-on (SSO) enabled.
This video explains how to integrate Questionmark and Moodle.
This video explains how to integrate Questionmark and Cornerstone OnDemand.
This video explains how to integrate Questionmark and SuccessFactors Learning.
Learn more about Publish to LMS and how it enables you to launch-and-track Questionmark assessments from within a learning management system.
This video explains how to integrate Questionmark and Canvas.
Getting Started with the Results API - Creating dashboard views with Microsoft Power BI
This video provides a high-level explanation of the OData protocol and shows how Questionmark's Results API for Analytics (which uses OData) can be used to easily access assessment results data. It also explains how common business intelligence tools such as Excel, Tableau, and SAP BusinessObjects can use the Results API as a data source for creating custom reports and dashboards. Learn more about analysis and reporting of assessment results in Questionmark.
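To illustrate how an OData client addresses such a feed, the minimal sketch below builds query URLs from the standard OData system query options ($select, $filter, $top). The base URL and entity set name here are hypothetical placeholders, not documented Questionmark endpoints:

```python
from urllib.parse import quote

def build_odata_query(base_url, entity, select=None, filter_expr=None, top=None):
    """Build an OData query URL using standard system query options."""
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filter_expr:
        # Percent-encode the filter expression (spaces become %20).
        parts.append("$filter=" + quote(filter_expr))
    if top is not None:
        parts.append("$top=" + str(top))
    url = f"{base_url}/{entity}"
    if parts:
        url += "?" + "&".join(parts)
    return url

# Hypothetical base URL and entity set, for illustration only.
url = build_odata_query(
    "https://example.invalid/odata",
    "AssessmentResults",
    select=["Participant", "TotalScore"],
    filter_expr="TotalScore ge 80",
    top=10,
)
print(url)
```

Any OData-aware tool (Excel, Power BI, Tableau, BusinessObjects) accepts URLs of this shape as a data-source address, which is what makes the feed usable without custom integration code.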
How to use SAP BusinessObjects to access your organization's Questionmark assessment results via OData feeds from Questionmark's Results API.
In this video, you'll learn how to use Questionmark's Results API for Analytics as an OData data source for the SAP BusinessObjects Business Intelligence Suite. Once BusinessObjects is connected to your assessment results data through the Results API, you can easily create custom reports and dashboards.
Users Conference Webcast Recordings
Questionmark CEO Eric Shepherd will moderate this session and call on members of our product management teams to present and demonstrate the latest releases and new features in assessment authoring, delivery, and reporting.
You got your test scores, but what do they mean? Assessments include an evaluative component for interpreting scores in relation to the domain or measured construct, and test users require documentation on how to make those interpretations. Defining performance level descriptors and setting meaningful cut scores help meet these requirements. In this session, we will discuss interpretation differences between norm-referenced and criterion-referenced assessments. We will review the role of performance levels in score interpretation, and we will introduce several common methods for setting performance standards that are used to link scores to interpretations. We will illustrate concepts with examples and activities.
Presenter: Austin Fossey, Reporting & Analytics Manager, Questionmark
We have all been exposed to some sort of assessment in our lives. Think back to your school days – you would sit in class and then get a test on what you were supposed to know. Most times the test was just a regurgitation of rote memory. If you paid attention and read the assignments you probably would pass. How long did you remember the material after the assessment? You probably forgot most of it by the end of the day so you could cram more in your head for the next test. Did your score really demonstrate what you could actually do? All it did was test your knowledge and how well you could memorize.
Writing test items at higher cognitive levels is a challenge that most test writers are apprehensive about tackling. This session will explain how to write meaningful test items at all levels of Bloom’s Taxonomy – Knowledge, Comprehension, Application, Analysis, Synthesis and Evaluation. Additionally, tips on writing items at various levels of complexity will be discussed.
- James R. Parry, Test Development Manager, U.S. Coast Guard Training Center, Yorktown, VA
- LT Carlos E. Schwarzbauer, IT Lead, US Coast Guard Force Readiness Command (FC-Tadl)