
L@S 2017: Fourth Annual ACM Conference on Learning at Scale


Link: http://learningatscale.acm.org/las2017/
 
When Apr 20, 2017 - Apr 21, 2017
Where Massachusetts Institute of Technology, Cambridge, MA, USA
Submission Deadline Oct 25, 2016
Notification Due Dec 14, 2016
Final Version Due Feb 10, 2017
Categories    moocs   educational data mining   data mining   learning analytics
 

Call For Papers

The goal of this conference is to promote scientific exchange of interdisciplinary research at the intersection of the learning sciences and computer science. Inspired by the emergence of Massive Open Online Courses (MOOCs) and the accompanying huge shift in thinking about education, this conference was created by ACM as a new scholarly venue and key focal point for the review and presentation of the highest quality research on how learning and teaching can change and improve when done at scale.

MIT’s Office of Digital Learning (ODL) aims to transform teaching and learning at MIT and around the globe through the innovative use of digital technologies. ODL extends MIT’s mens et manus (mind and hand) approach to digital learning, uniquely combining digital tools with individualized teaching, research-driven methodology, an ethos of open sharing, and the in-person magic of MIT, for students at MIT and for learners around the world. Through its many strategic education initiatives, ODL collaborates closely with international governments and organizations to develop new technologies and systems that increase participation and quality in education. We are proud to host the Learning at Scale conference next year. Come join us!

------------------------------------

Learning at Scale investigates large-scale, technology-mediated learning environments with many learners and few experts to guide them. Large-scale learning environments are incredibly diverse. Massive open online courses (e.g. from edX or Coursera, or connectivist MOOCs), intelligent tutoring systems (e.g. Dreambox or Cognitive Tutor), open learning courseware (e.g. MIT’s OpenCourseWare), learning games (e.g. DragonBox), citizen science communities (e.g. Vital Signs), collaborative programming communities (such as Scratch), community tutorial systems (e.g. StackOverflow), shared critique communities (such as DeviantArt), and the countless informal communities of learners (e.g. the Explain Like I’m Five subreddit) are all examples of learning at scale. These systems either depend upon large numbers of learners, or they are enriched by data from prior use by many learners. They share a common purpose (to increase human potential) and a common infrastructure of data and computation to enable learning at scale.

Investigations of learning at scale naturally bring together two different research communities. Since the purpose of these environments is the advancement of human learning, learning scientists are drawn to study established and emerging forms of knowledge production, transfer, modeling, and co-creation. Since large-scale learning environments depend upon complex infrastructures of data storage, transmission, computation, and interface, computer scientists are drawn to the field as a powerful site for the development and application of advanced computational techniques. At its very best, the Learning at Scale community supports the interdisciplinary investigation of these important sites of learning and human development.

The ultimate aim of the Learning at Scale community is the enhancement of human learning. In emerging education technology genres (such as intelligent tutors in the 1980s or MOOCs circa 2012), researchers often use a variety of proxy measures for learning, including measures of participation, persistence, completion, satisfaction, and activity. In the early stages of investigating a technological genre, it is entirely appropriate to begin lines of research by investigating these proxy outcomes. As lines of research mature, however, it is important for the community of researchers to hold each other to increasingly high standards and expectations for directly investigating thoughtfully constructed measures of learning. In the early days of research on MOOCs, for instance, many researchers documented correlations between measures of activity (videos watched, forum posts, clicks) and other measures of activity, and between measures of activity and outcome proxies such as participation, persistence, and completion. As MOOC research matures, additional studies that document these kinds of correlations should give way to more direct measures of student learning and to evidence that instructional techniques, technological infrastructures, learning habits, and experimental interventions improve learning. As a community, we believe that the very best of our early papers define a foundation to build upon, not an established standard to aspire to.
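
To make the proxy-measure pattern concrete, the sketch below computes the kind of activity-to-proxy correlation described above, on purely synthetic data. Every variable, threshold, and number here is invented for illustration; a real study would draw on actual course logs and validated measures of learning.

```python
# Illustrative only: correlating a MOOC activity measure (videos watched)
# with an outcome proxy (course completion) on synthetic data.
import random

random.seed(0)

# Synthetic learners: completion is a noisy function of activity, standing
# in for the proxy relationships described in the paragraph above.
n = 1000
videos_watched = [random.randint(0, 40) for _ in range(n)]
completed = [1 if v + random.gauss(0, 8) > 20 else 0 for v in videos_watched]

def pearson(xs, ys):
    """Plain Pearson correlation, with no external libraries."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"activity vs. completion proxy: r = {pearson(videos_watched, completed):.2f}")
```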

We encourage diverse topical submissions to our conference; example topics include, but are not limited to, the following. Across all topics, we encourage a particular focus on contexts and populations that have historically been underserved.

1. Novel assessments of learning, drawing on computational techniques for automated, peer, or human-assisted assessment
2. New methods for validating inferences about human learning from established measures, assessments, or proxies
3. Experimental interventions in large-scale learning environments that show evidence of improved learning outcomes
* Evidence of heterogeneous treatment effects in large experiments that point the way towards potential personalized or adaptive interventions
* Domain-independent interventions, inspired by social psychology, behavioral economics, and related fields, with the potential to benefit learners across diverse fields and disciplines
* Domain-specific interventions, inspired by discipline-based educational research, that have the potential to advance the teaching and learning of specific ideas, misconceptions, and theories within a field
4. Methodological papers that address challenges emerging from the “replication crisis” and “new statistics” in the context of Learning at Scale research:
* Best practices in open science, including pre-planning and pre-registration
* Alternatives to conducting and reporting null hypothesis significance testing
* Best practices in the archiving and reuse of learner data in safe, ethical ways
* Advances in differential privacy and other methods that reconcile the opportunities of open science with the challenges of privacy protection (a minimal code sketch appears after this topic list)
5. Tools or techniques for personalization and adaptation, based on log data, user modeling, or choice
6. The blended use of large-scale learning environments in specific residential or small-scale learning communities, or the use of sub-groups or small communities within large-scale learning environments
7. The application of insights from small-scale learning communities to large-scale learning environments
8. Usability studies and effectiveness studies of design elements for students or instructors, including:
* Status indicators of student progress
* Status indicators of instructional effectiveness
* Tools and pedagogy to promote community, support learning, or increase retention in at-scale environments
9. Log analysis of student behavior, e.g.:
* Assessing the reasons for student outcomes, as revealed by modifications to tool design
* Modeling students based on responses to variations in tool design
* Evaluation strategies such as quiz or discussion forum design
* Instrumenting systems and data representation to capture relevant indicators of learning
10. New tools and techniques for learning at scale, including:
* Games for learning at scale
* Automated feedback tools (for essay writing, programming, etc.)
* Automated grading tools
* Tools for interactive tutoring
* Tools for learner modeling
* Tools for representing learner models
* Interfaces for harnessing learning data at scale
* Innovations in platforms for supporting learning at scale
* Tools to support capturing and managing learning data
* Tools and techniques for managing privacy of learning data
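
As a minimal illustration of the differential-privacy bullet under topic 4 above, the following sketch applies the standard Laplace mechanism to a single counting query over hypothetical learner data. All data, names, and parameters are invented; a real deployment would need careful sensitivity analysis and privacy-budget accounting across repeated queries.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling: if U ~ Uniform(-0.5, 0.5), then
    # -scale * sgn(U) * ln(1 - 2|U|) is Laplace(0, scale) distributed.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1, so Laplace noise with scale
    # 1/epsilon yields an epsilon-differentially-private release.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical learner data: per-learner course completion rates.
random.seed(0)
completion_rates = [random.random() for _ in range(10_000)]

# Release the number of high-completion learners under a budget of epsilon = 0.5.
noisy = dp_count(completion_rates, lambda r: r > 0.8, epsilon=0.5)
print(f"noisy count of learners with >80% completion: {noisy:.1f}")
```

The counting query changes by at most one when any single learner's record is added or removed, which is why noise scaled to 1/epsilon suffices in this sketch.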
