We discuss the paper “Checking App Behavior Against App Descriptions” [1] by A. Gorla et al.
Alessandra Gorla: Assistant Research Professor at the IMDEA Software Institute in Madrid, Spain.
Ilaria Tavecchia: Data Scientist at Hyper Anna, Sydney, New South Wales, Australia.
Florian Gross: Distributed Systems Engineer (Fraud) at Twitch, Berlin, Germany.
Andreas Zeller: Full Professor of Software Engineering at Saarland University in Saarbrücken, Germany.
The work for this paper took place at Saarland University.
At the time of publication, the first three authors were research assistants/associates at Saarland University.
Why do this research?
What does the paper do?
For checking implemented app behavior against advertised app behavior, the paper presents CHABADA, which:
- applies Latent Dirichlet Allocation (LDA) topic modeling to the Google Play descriptions of 22,500+ Android apps to identify their main topics;
- clusters apps by related topics using K-means;
- statically extracts each app's use of sensitive APIs, i.e. APIs governed by an Android permission;
- within each cluster, flags apps whose sensitive-API usage is anomalous, using unsupervised One-Class SVM classification (see the sketch after this list).
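To make these steps concrete, here is a minimal sketch of such a pipeline in Python with scikit-learn on toy data. The descriptions, API columns, topic/cluster counts, and SVM parameters are all illustrative assumptions, not the paper's actual corpus, tooling, or hyperparameters.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

# Toy stand-ins for the paper's inputs: Google Play descriptions and a
# binary matrix of sensitive-API usage (one row per app, one column per API).
descriptions = [
    "offline city maps and travel guide with hotel booking",
    "travel planner with maps hotels and flight deals",
    "gps navigation maps with live traffic and routes",
    "simple flashlight turns your camera led into a torch",
    "bright flashlight with strobe and compass widget",
    "flashlight widget with color screen light",
]
# Illustrative API columns: INTERNET-, GET_ACCOUNTS-, SEND_SMS-guarded APIs.
api_usage = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [0, 0, 0],
    [1, 1, 1],  # a "flashlight" that reads accounts and sends SMS: the kind of mismatch to flag
])

# 1. LDA abstracts descriptions into topics.
counts = CountVectorizer(stop_words="english").fit_transform(descriptions)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# 2. K-means groups apps with similar topic distributions.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(topics)

# 3. Per cluster, a One-Class SVM over API usage scores each app;
#    negative scores fall outside the learned "normal" boundary.
for c in np.unique(labels):
    members = np.where(labels == c)[0]
    svm = OneClassSVM(nu=0.2, kernel="rbf", gamma="scale").fit(api_usage[members])
    for app, score in zip(members, svm.decision_function(api_usage[members])):
        flag = "OUTLIER" if score < 0 else "ok"
        print(f"cluster {c}, app {app}: score {score:+.3f} ({flag})")
```

With so few toy samples the exact scores are not meaningful; the point is the shape of the pipeline: text topics define the peer group, and API usage defines normality within it.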
How does the paper evaluate the presented methods?
RQ1: Can CHABADA effectively identify anomalies (i.e. mismatches between description and behavior) in Android apps?
RQ2: Can CHABADA be used to reveal malicious Android apps?
What does the paper find?
CHABADA is able to find several examples of false advertising, plain fraud, and other questionable behavior. Investigating RQ1, the authors manually assessed the 160 top outliers and classified 26% of them as malicious, 13% as dubious, and 61% as benign.
CHABADA is effective as a malware detector. Investigating RQ2, the authors found that CHABADA detects the majority of malware (it correctly identified 56% of the malicious apps), even without knowing existing malware patterns.
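The "without knowing existing malware patterns" point follows from the one-class setup: the classifier is trained only on presumably benign apps and then asked to score known malware. Here is a hedged sketch of that evaluation idea; the feature dimensions, sample counts, and nu value below are synthetic assumptions, not the paper's data or settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Synthetic binary API-usage features standing in for "benign" Google Play
# apps and known malware samples (malware drawn to use more sensitive APIs).
rng = np.random.default_rng(0)
benign = (rng.random((200, 10)) < 0.2).astype(float)
malware = (rng.random((50, 10)) < 0.6).astype(float)

# Train only on (presumed) benign apps: no malware signatures are involved.
detector = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(benign)

# predict() returns -1 for points outside the learned boundary.
detected = (detector.predict(malware) == -1).mean()
false_alarms = (detector.predict(benign) == -1).mean()
print(f"malware flagged: {detected:.0%}, benign flagged: {false_alarms:.0%}")
```

The trade-off the paper's RQ2 probes is visible even in this toy: raising nu flags more malware but also more benign apps.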
Why are the results important?
How do you find the presentation of the results?
What can we really do with these results?
What do you think about the related work?
What do we think about this paper in general?
How could we improve this paper ourselves?
Can you easily replicate this study?
Did the future work ever happen?
To be filled in after discussion!
To be filled in after discussion!