
Review - HotPlanet 2013 Submission

Minor formatting changes (carriage returns) were made to the reviews to ensure proper rendering via the verbatim tag in TWiki.

Paper

A. Striegel, S. Liu, L. Meng, C. Poellabauer, D. Hachen, O. Lizardo, "Lessons Learned from the NetSense Smartphone Study," in Proc. of ACM HotPlanet, pp. 51-56, Hong Kong, China, Aug. 2013. Winner: Best Paper Award

Review 1

----------------------- REVIEW 1 ---------------------
PAPER: 19
TITLE: Lessons Learned from the NetSense Smartphone Study
AUTHORS: Aaron Striegel, Shu Liu, Lei Meng, Christian Poellabauer, David Hachen and Omar Lizardo

OVERALL EVALUATION: 3 (strong accept)
REVIEWER'S CONFIDENCE: 5 (expert)

----------- REVIEW -----------
This is an interesting paper - it outlines some lessons learned from
running a large smartphone study. It is rather vague on some details,
however, and I recommend that the authors be more specific in order to
make the paper more useful for readers.

The paper is of clear relevance to the workshop and should be of
interest to attendees.

You talk about low response rates. But what kind of rates did you
receive, and what did you expect? Have you looked at the large number
of mobile and smartphone-based experience sampling and diary studies
to get a feel for the usual response rates? e.g.,
doi:10.1007/978-1-4471-4054-2_8, which reports response rates for a study,
or doi:10.1145/1409635.1409657, which looks at increasing response rates.

Figures 2 and 4 were confusing because they have different y-axes. It
would be better if both were % of users. This is especially important
since I wasn't exactly sure how many users were in your study. You
refer to 200, but you also say "at or under two hundred" in S3.

"Good user" sounds odd as their behaviour is not intrinsically good or
bad. It would be better to refer to "compliant" users instead.

You are also vague about what constitutes a "good" user. You say
"using the phone for a reasonable portion of the week, and completing
quizzes in a timely manner". What are "reasonable" and "timely"?
Please be specific.

I did not understand some of the quotations. Perhaps this is because I
am not American? What does "I will be glad when their texting is off
of my plan" mean? "off of" is grammatically incorrect and needs a
(sic) but since this is a common reason I assume it is not an actual
quotation. Similarly "the phone was too difficulty to use" needs (sic)
or fixing.

Another confusing sentence: "the use of the phone as an alarm clock
and the distance to the floor from the loft far exceeded the
tolerances of Gorilla Glass". What does a loft have to do with an
alarm clock?

In the conclusions you talk about "subsidy vs ownership". What do you
mean by a subsidy? What are you subsidising? Do you mean subsidising
their existing mobile phone package? Why not just pay the
participants?

Your footnote marks are incorrectly used - they go after the full stop
at the end of a sentence.

It is "SQLite" not "sqlLite".

A random Android comment: why didn't you use getPackageInfo() instead
of polling for application installation information?

The data sharing section was quite aggressively written - it comes
across as if you do not want to share because it is too difficult.
Sharing the data or at least a subset would be very useful for the
community. I suggest that you speak to CRAWDAD.org about this.

Review 2

----------------------- REVIEW 2 ---------------------
PAPER: 19
TITLE: Lessons Learned from the NetSense Smartphone Study
AUTHORS: Aaron Striegel, Shu Liu, Lei Meng, Christian Poellabauer, David Hachen and Omar Lizardo

OVERALL EVALUATION: 1 (weak accept)
REVIEWER'S CONFIDENCE: 4 (high)

----------- REVIEW -----------
The authors present their experiences of large-scale and long-term data collection from the NetSense study.
These experiences offer lessons for other researchers, especially those who want to collect such data,
in terms of approving, launching, and managing a study of this kind. The authors describe the issues that
must be carefully considered and decided, and the problems that arose unexpectedly at each step of the data collection.

I like the use of quizzes, which I believe is intended to check actual use of the auxiliary phone. While it might make
participants feel that they are under surveillance, it seems like a ‘practical’ approach to managing large-scale
and long-term data collection. The authors also report issues from the real deployment, such as dropped and lost
phones and cracked screens (Figure 3), which will be important in planning future research.

My concern lies in the presentation; the organization of the paper is not effective, and may lead readers
to misunderstand and, even worse, underestimate the contributions of the paper. The authors simply enumerate
their experiences one by one, and in some places related issues are scattered through the paper. I would like to
suggest that the authors group their experiences into several conceptually related categories. This would help readers
easily pick out important lessons and apply them to their own studies. One possible way of grouping would be as follows:

   Issues related to the purpose of study, selection of data, and how to collect them, etc. 
   Issues related to participants such as target users, how to recruit them, privacy concerns, and IRB issues, etc. 
   Issues on the management of the study such as data collection periods, support for long term participation, etc. 

Additionally, I have some questions about the data collection:
1.   What is the cost of the data-logging application in terms of energy consumption and computation?
        The performance of mobile devices and battery lifetime are important issues for participants. If the logging
        service requires significant energy and/or computation, the percentage of good users may decrease.
2.   Are there any issues with the quality of the data in terms of completeness and fidelity?
3.   What is the reason for the rapid drop in good users in June and August 2012? In Figure 4, the percentage of
        good users drops dramatically in that period and is immediately restored.

Overall, the paper is interesting and offers some important lessons. However, the organization and presentation
of the paper make me hesitate to give a higher score, and should be much improved!

Review 3

----------------------- REVIEW 3 ---------------------
PAPER: 19
TITLE: Lessons Learned from the NetSense Smartphone Study
AUTHORS: Aaron Striegel, Shu Liu, Lei Meng, Christian Poellabauer, David Hachen and Omar Lizardo

OVERALL EVALUATION: 3 (strong accept)
REVIEWER'S CONFIDENCE: 4 (high)

----------- REVIEW -----------
The paper describes the experiences and lessons learned through a large-scale, long-term user study
on smartphone data gathering. The study involved 200 students at the University of Notre Dame and ran for
two years.

I really enjoyed the paper. The authors present details and information that are perhaps not commonly found
in technical papers, but are valuable for researchers who are interested in running large-scale mobile phone
studies. I found the authors' decision to provide their own phones to all the participants, and also to
commit themselves to maintaining and replacing devices throughout the study, quite daring. Judging from other
experiences, though, I believe that this is a safer approach for user retention and, more importantly, for
instrumenting devices with the exact data collection software that is necessary. I understand the authors'
comment that this approach required significant effort from the research team. However, a subsidy model could
lead to significant effort in maintaining the application for different devices, and in addressing possible
inconsistencies in how the logging application performs across different devices, even ones that use the same
operating system.

I noticed that the authors do not make any comments in the paper about power consumption. After all, the
phones had to run a number of background tasks throughout the study. My understanding is that this was
probably a side effect of giving the users new devices that they had not used without the background tasks
before. Essentially, the only battery life they ever experienced was the one with the background sensing
tasks running. It would be interesting if the authors could offer their experience regarding this aspect.

Overall a nice paper, and an impressive effort to gather this dataset.
Topic revision: r1 - 2014-03-05 - AaronStriegel