
LASER 2014 : Workshop on Learning from Authoritative Security Experiment Results


Link: http://www.laser-workshop.org
 
When: Oct 15, 2014 - Oct 16, 2014
Where: Arlington, VA
Submission Deadline: Jul 11, 2014
Notification Due: Aug 25, 2014
Final Version Due: Nov 17, 2014
Categories: security, science, experiment
 

Call for Papers

2014 LASER Workshop - Learning from Authoritative Security Experiment Results

http://www.laser-workshop.org

Arlington, Virginia - October 15-16, 2014

Paper submissions extended to 11 July 2014 (your time zone)

The LASER workshop invites papers that strive to exemplify the practice of science in cyber security. The goal of this series of workshops, now in its third year, is to address the practice of good science. We encourage participants who want to help others improve their practice and participants who want to improve their own practice. LASER welcomes papers that are:

- Exemplars of the practice of science in cyber security.
- Promising works-in-progress that would benefit from expert feedback.

LASER seeks to foster a dramatic change in the paradigm of cyber security research and experimentation. Participants will find LASER to be a constructive and highly interactive venue featuring informal paper presentations and extended discussions. To promote a high level of interaction, attendance is anticipated to be limited to about 40 people. However, to support a high level of student participation, this limit may be increased.

Please send all questions to info@laser-workshop.org.

(Hitchens' razor) What can be asserted without evidence can be dismissed without evidence. -- Christopher Hitchens

Workshop Goals
--------------

While everyone has a notion of what science is, *sound* science is neither widely nor well practiced. The goal of this forum is to help the community change that.

To effect such change, we need to be specific about what constitutes *sound* science, and about what it means to practice it. "Science" is the process of linking facts and fact-based theory, across disciplines, to create a common framework of understanding. Such frameworks can be informed by both positive and negative results. While the scientific process clearly includes both theory and practice (where practice is usually in the form of experimentation), LASER's primary focus is on experimentation. There are three essential issues, each of which will play a critical role in determining the suitability of papers submitted:

(1) Is it science? How persuasive is the author's (possibly implicit) claim that the work is scientific, i.e., that it links theory and practice (via experimentation)? What is the theory and how does the experiment inform it, or vice versa, what is the experiment, and how is it informed by theory?

(2) Is it well executed? A paper or project can have a decidedly scientific bent, as previously defined, and yet be poorly executed. Good (experimental) scientific execution includes: a clear statement of a research question with an explicit claim or hypothesis or problem being solved; an experimental study type or design suited to addressing that research question (e.g., observational, random controlled trial, etc.); adequate sample size; proper statistical analysis; freedom from confounds; sound methodology; rigorous collection of data (where appropriate); a reproducible experiment; and sensible, justifiable conclusions that are well supported by experimental evidence.

(3) Is it well reported? Any scientifically sound experiment is of little use if it is not properly and thoroughly reported. A good report/paper includes at least a structured abstract (details below); introduction and background; related work; problem being solved and/or hypothesis being tested; experimental methods (where appropriate, methods include apparatus and instrumentation, materials, subjects/objects of study, instructions to subjects, design, procedure); data; analysis; results; discussion; and conclusion.

Referees will be asked to judge the merits of papers regarding the following:

- Is the title appropriate for the work described?
- Is there a structured abstract?
- Does the paper contain all of the sections you would expect (Introduction, Related work, Theory, Methods, Analysis, etc.)?
- Is there a clear statement of the problem being solved or the hypothesis being tested or the research question being asked?
- Does the author do a good job of synthesizing the literature?
- Is the experimental work reproducible?
- Is the methodology clearly explained (including details noted above)? A complete methodology bears on reproducibility.
- Is the paper well-organized?
- Are the sections well-developed?
- Does the author answer the questions s/he sets out to answer?
- Does this science represent a meaningful contribution to the literature?
- Does the theory connect to the data?
- Is the paper well-written and easy to understand?
- Are you convinced by the author's results? Why or why not?

Referees are instructed to base their judgments on the scientific merits of the papers, consistent with the goals of the workshop.

The best papers meeting the above criteria will be accepted for presentation and publication. Papers falling modestly short of these criteria will be accepted for an embedded works-in-progress workshop and tutorial on executing and reporting good science. Please note that all guidelines in this Call for Papers apply equally to papers in plenary and works-in-progress sessions.

Quod gratis asseritur, gratis negatur.
"What is asserted without reason may be denied without reason."


Embedded Works-in-Progress Workshop
-----------------------------------

To help authors improve the scientific soundness of their research, with an eye toward future submissions to LASER or other venues, experienced senior researchers will interact with both the presenter and the audience, offering constructive feedback on how to make the work more rigorous and more scientifically sound.


Travel Scholarships
-------------------
Travel support is available in modest amounts for students in need.

The LASER workshop is funded in part by NSF Grant #1143766 and by the Applied Computer Security Associates (ACSA).


Important Dates
---------------

July 11 Papers due (extended from June 30)
August 25 Authors notified of accepted/rejected full papers
September 22 Pre-conference versions of full papers due
October 15-16 2014 LASER Workshop
November 17 Post-conference versions of full papers due


Submission and Review Process
-----------------------------

Papers including a structured abstract are solicited (see guidelines below). Papers follow a typical pattern of submission, review, notification, pre-conference version, conference presentation, and final post-conference version.

All papers must be submitted via OpenConf: http://www.openconf.org/laser2014/

At least one author from every accepted paper must attend the workshop and present the paper.

See http://www.laser-workshop.org for full and up-to-date details on the workshop. Please direct all questions to info@laser-workshop.org.


Paper Guidelines
----------------

- Submissions should be 6-10 pages, inclusive of tables, figures, and references.
- All pages must be numbered.
- Papers must be submitted in PDF format. Do NOT submit files in Word, WordPerfect, LaTeX or other word processing format.
- Pages should fit on USA-style 8.5x11 inch paper. All text and figures must fit a text block 6.5" wide x 9" deep.
- All text must be 10 point type on 12 point (single-spaced) leading, two-column format, and Times Roman or a similar font for the body of the paper.
- Figures and tables should be legible when printed, without requiring magnification.
- If using Microsoft Word or LaTeX, use the appropriate USENIX template and sample first pages (two-column format) from the USENIX templates page: https://www.usenix.org/templates-conference-papers
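
For authors starting from a bare LaTeX document rather than the USENIX template, the layout requirements above might translate roughly into the preamble sketched below. This is a minimal sketch only: the geometry and mathptmx packages are our own choices, not part of the official template, so check the result against the USENIX sample pages before submitting.

    % Minimal layout sketch; the USENIX template remains the recommended
    % starting point and the authoritative reference for formatting.
    \documentclass[letterpaper,twocolumn,10pt]{article}

    \usepackage{mathptmx}                  % Times Roman (or similar) body font
    \usepackage[letterpaper,
                textwidth=6.5in,           % text block 6.5in wide ...
                textheight=9in]{geometry}  % ... by 9in deep, on 8.5x11in paper

    % 10pt type on 12pt (single-spaced) leading is the article-class default,
    % so no \linespread adjustment is needed.

    \pagestyle{plain}                      % all pages numbered

    \begin{document}
    \title{Your Title Here}
    \author{Author One \\ Affiliation One \and Author Two \\ Affiliation Two}
    \maketitle
    % Body: 6-10 pages, inclusive of tables, figures, and references.
    % Submit the compiled PDF only.
    \end{document}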


Structured-Abstract Guidelines
------------------------------

Every paper should start with a structured abstract of roughly 150-350 words. The abstract should contain concise statements that tell the whole story of the study, presented in a consistent structure that facilitates quick assessment as to whether or not the paper may meet the reader's needs and warrant reading the full paper. Essential elements of structured abstracts are background, aim, method, results, and conclusions:

- Background. State the background and context of the work described in the paper.
- Aim. State the research question, objective, or purpose of the work in the paper.
- Method. Briefly summarize the method used to conduct the research, including the subjects, procedure, data, and analytical method.
- Results. State the outcome of the research using measures appropriate for the study conducted. Results are essentially the numbers.
- Conclusions. State the lessons learned as a result of the study and recommendations for future work. The conclusions are the "so what" of the study.

By using this format for an abstract, the author has a good structure not only for his or her paper but also for creating slides to present the work.
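
For authors working in LaTeX, one simple way to typeset such a structured abstract is sketched below. The bold labels are plain text inside the standard abstract environment; no special macro is assumed or required, and any layout that makes the five elements explicit is equally acceptable.

    % Structured-abstract sketch: one paragraph per element, labels in bold.
    \begin{abstract}
    \textbf{Background.} One or two sentences of background and context.

    \textbf{Aim.} The research question, objective, or purpose of the work.

    \textbf{Method.} Subjects, procedure, data, and analytical method, in brief.

    \textbf{Results.} The outcome of the research, in appropriate measures.

    \textbf{Conclusions.} Lessons learned and recommendations for future work.
    \end{abstract}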

Here is an example abstract (140 words) from the LASER 2012 paper cited below:

Kevin S. Killourhy and Roy A. Maxion. 2012. Free vs. transcribed text for keystroke-dynamics evaluations. In Proc. of the 2012 Workshop on Learning from Authoritative Security Experiment Results (LASER '12). ACM, New York, NY, USA, 1-8.

- Background. One revolutionary application of keystroke dynamics is continuous re-authentication: confirming a typist's identity during normal computer usage without interrupting the user.

- Aim. In laboratory evaluations, subjects are typically given transcription tasks rather than free composition (e.g., copying rather than composing text), because transcription is easier for subjects. This work establishes whether free and transcribed text produce equivalent evaluation results.

- Method. Twenty subjects completed comparable transcription and free-composition tasks; two keystroke-dynamics classifiers were implemented; each classifier was evaluated using both the free-composition and transcription samples.

- Results. Transcription hold and keydown-keydown times are 2-3 milliseconds slower than free-text features; tests showed these effects to be significant. However, these effects did not significantly change evaluation results.

- Conclusions. The additional difficulty of collecting freely composed text from subjects seems unnecessary; researchers are encouraged to continue using transcription tasks.

Organizing Committee:
---------------------

Laura Tinnel (SRI International), General Chair
Tiffany Frazier (Apogee Research), Program Co-Chair
Roy Maxion (CMU), Program Co-Chair
David Balenson (SRI International), Treasurer/Local Arrangements
Sean Peisert (UC Davis, Lawrence Berkeley National Lab), Publicity
Kevin Butler (University of Oregon), Publications
Nathanial Husted (Indiana University), Broadcasting/Video
Carrie Gates (Dell), Advisor
Greg Shannon (CMU/CERT), Advisor

Preliminary Program Committee:
------------------------------

Tiffany Frazier (Apogee Research), Co-Chair
Roy Maxion (CMU), Co-Chair
David Balenson (SRI International)
Matt Bishop (UC Davis)
Sadie Creese (Oxford University)
Richard Ford (Florida Institute of Technology)
Evan Fortunato (Apogee Research)
Greg Frazier (Apogee Research)
Frank Greitzer (PsyberAnalytix)
Shing-hon Lau (CMU)
Tom Longstaff (JHU/APL)
John McHugh (UNC)
Aad van Moorsel (Newcastle University)
Sean Peisert (UC Davis, Lawrence Berkeley National Lab)
Angela Sasse (University College London)
Kymie Tan (JPL)
Laura Tinnel (SRI International)
