Research

Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice

Link: https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf
Summary:

This paper looks at the popular Diffie-Hellman key exchange and finds that it is less secure in practice than most people assume. To the researchers' credit, they frame this weakness as a flaw in how the TLS protocol uses Diffie-Hellman rather than an implementation bug, meaning it sits at the core of the protocol and cannot be easily patched overnight. The core problem is that most servers reuse a small number of standardized primes, and the expensive precomputation step of the discrete-log attack depends only on the prime, so one precomputation against a common 1024-bit group can be reused to break many individual connections. The study argues that modern deployments should therefore move beyond the 1024-bit standard, which the authors estimate is within reach of the National Security Agency and other nation states. The paper even examines leaked NSA documents suggesting the agency may have used a similar method to passively decrypt VPN traffic.
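
Because the whole argument rests on the Diffie-Hellman math, a minimal sketch may help. Below is a toy Python version of a textbook DH exchange over a deliberately tiny prime (the values are illustrative, not real TLS parameters); the comment at the end notes where the paper's precomputation attack fits in.

    # Toy Diffie-Hellman exchange (illustrative only; real TLS groups are 1024+ bits).
    import secrets

    p = 4294967291   # small stand-in for a widely shared group prime (2**32 - 5, prime)
    g = 5            # small public base, enough to show the arithmetic

    # Each side picks a private exponent and publishes g^x mod p.
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A = pow(g, a, p)   # Alice -> Bob
    B = pow(g, b, p)   # Bob -> Alice

    # Both sides derive the same shared secret without ever transmitting it.
    assert pow(B, a, p) == pow(A, b, p)

    # The paper's attack: recovering a or b from A or B is a discrete-log problem, and
    # the dominant cost of the number field sieve depends only on p. Precomputing once
    # for a common 1024-bit p lets a well-resourced attacker cheaply break many sessions.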

 

What I liked:

  1. The paper is really pertinent given Diffie-Hellman's widespread use and the post-Snowden NSA disclosures

  2. The paper does a good job of explaining the process - I especially liked the visual diagram which put a picture in my head

  3. The attack targets an inherent flaw in the protocol rather than something fixable with a quick patch, which makes the paper more impressive

  4. The level of collaboration between all the different authors. This is probably the most diverse group of researchers that I've seen reading papers this year.

  5. I like how the paper specifically broke down what percentage of vulnerable servers used which default groups, i.e., 82% Apache, 10% mod_ssl, etc.

 

What I didn't like:

  1. The paper focuses a lot on nation-state-level attacks; I don't think it's really possible to defend against a nation state

  2. It would have been interesting to compare servers by company and see whether Akamai, Cisco, or some other company is more vulnerable and why, but the paper didn't really dig into this

  3. The paper never really conveyed the scale of the attack in terms of user traffic, only saying something like 7% of the top 1,000,000 most-trafficked websites rather than something like "1.2 million users a day", which I think is a worse way to describe the scale of the problem

  4. I think going into the NSA documents wasn't that useful; there are a lot of different ways the agency could have broken into those VPN networks

  5. I think a major flaw of this study is that it completely neglects Diffie-Hellman exchanges over elliptic curve groups, which are much more resistant to these types of attacks

  6. I really would have liked the paper to go deeper into optimization techniques for reducing the time the attack takes -- I think that's where the academic value of this paper lies

Points for Discussion:

  1. What might be an alternative short-term fix that keeps current networking methods in place? Moving over to different cipher suites?

  2. When is the industry planning to update to the new standard of 2048 bits?

  3. Under the assumptions made in this paper, for how long could the NSA have been passively listening in on VPN traffic?

  4. What defines an academic team that is able to break 768-bit groups -- computing power? How is this different from the enthusiast hacker out there?

  5. How hard would it be to patch the 512 bit group vulnerability caused by the flaw in TLS?

New Ideas:

  1. Explore possible cases where this attack may have been carried out by nation states. Are there any symptoms this type of attack gives off?

  2. Study the rollout of past internet protocols and their subsequent adoption

  3. Make a better data visualization of how many users are vulnerable to this type of attack in order to get a better view of the issue

  4. How does this vulnerability affect specific industries? We could see which industries are more proactive about these types of attacks

  5. Explore the possible economic impact in monetary terms through a cost-benefit analysis for a potential attacker: what type of data would they steal, and how much would it be worth to them if they sold it?

The Matter of Heartbleed

Link: https://jhalderm.com/pub/papers/heartbleed-imc14.pdf

Summary:

This paper was more of a summary of knowledge on the Heartbleed attack, a vulnerability found by Google researcher Neel Mehta in March 2014. Essentially, the attack takes advantage of the Heartbeat Extension, which allows either endpoint of a TLS connection to detect whether its peer is still present. A heartbeat is sent from one peer to another, and on receipt of a heartbeat message the peer is supposed to respond with a similar heartbeat message confirming it is still there. The bug was that the vulnerable OpenSSL code trusted the attacker-supplied payload length and echoed back that many bytes, leaking up to roughly 64 KB of adjacent process memory per request. The attack took advantage of this, and this paper studied the immediate response and the response in the weeks after the vulnerability was disclosed to the world.
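
To make the over-read concrete, here is a tiny conceptual Python simulation of the buggy versus fixed behavior; the buffer contents and function names are made up for illustration and this is not the actual OpenSSL code.

    # Conceptual simulation of the Heartbleed over-read (not real OpenSSL code).

    # Pretend this is server process memory: the received heartbeat payload happens
    # to sit right next to unrelated secrets (contents are invented for illustration).
    server_memory = b"PING" + b"...session_key=0xDEADBEEF;user=alice;..."

    def buggy_heartbeat_response(claimed_length: int) -> bytes:
        """Vulnerable behavior: echo back `claimed_length` bytes starting at the
        payload, trusting the attacker-supplied length instead of the real size."""
        return server_memory[:claimed_length]

    def fixed_heartbeat_response(payload: bytes, claimed_length: int) -> bytes:
        """Patched behavior: silently discard requests whose claimed length
        exceeds the actual payload, as RFC 6520 requires."""
        if claimed_length > len(payload):
            return b""
        return payload[:claimed_length]

    print(buggy_heartbeat_response(4))            # b'PING' -- honest request
    print(buggy_heartbeat_response(40))           # leaks the adjacent "memory" too
    print(fixed_heartbeat_response(b"PING", 40))  # b'' -- malformed request dropped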

 

 

What I liked:

  1. The study didn't only look at initial responses to Heartbleed; it also found that a surprisingly low share of sites (25%) ended up replacing their certificates

  2. I really liked the paper's explanation of what heartbeat was, and its original purpose -- they did a good job explaining its use as a positive and then transitioning into how it could be exploited

  3. The paper does a good job explaining why it took so long to update servers; they specifically note that this isn't something you can fix with a configuration file -- you instead need to recompile OpenSSL with a flag, which is harder to do

  4. They use a lot of different sources that paint a picture of what was going on at the time; drawing on so many sources gives us data from many different vantage points

  5. I liked the use of visuals, specifically the graphs of days since disclosure versus the percentage of servers patched across the internet

  6. I thought the inclusion of the notification study and follow up with network operators was probably the most helpful piece of the study.

 

 

What I did not like:

  1. I think the paper was more of an SoK (systematization of knowledge) than something that adds anything new to the community

  2. I think the way they estimate who was vulnerable to Heartbleed prior to disclosure needs more work. They estimate that 24%-55% of servers were vulnerable but don't go into much detail about how they got that number or explain why there's such a huge range

  3. I would have wanted more specifics about sites and their industry to see which industries are the ones on top of security and which ones are lagging

  4. I don't like that they took a stance on whether or not this bug was exploited prior to disclosure -- I think they don't really state what they are looking for, so it's kind of like chasing a ghost

  5. I think the methodology they used to scan for vulnerable servers wasn't really the best given the high rate of false positives it produces -- would a survey of operators have been better?

 

 

Points for Discussion

  1. Why are bugs often found at the same time? A private security firm found the Heartbleed bug at about the same time as Neel Mehta at Google, and the same thing happened with the Intel speculative-execution flaws. Is this because of leaks or because of coincidence?

  2. Was there a faster way to deliver patches to the general public

  3. Was there enough of a lead time on Heartbleed for major websites to be expected to have a valid fix.

  4. Were the sources the paper used consistent, given that they came from so many different places? The data could have been measured differently across sources

  5. Are there leaks of this in the Snowden documents, or did this catch the NSA by surprise as well?

 

 

New Ideas:

  1. Study why and how different the estimates were for servers affected by heartbleed. Dig into why there is such a big variation

  2. Look into creating a measure of how easy something is to exploit (e.g., hobbyist hacker vs. nation-state capabilities), because the paper notes that the bug is easy to exploit, but "easy" is subjective

  3. Explore how responsible disclosure might have prevented more attacks especially on something as widespread as heartbleed.

  4. Explore how public knowledge sped up or slowed down the patching of major websites

  5. Is there a way this patch could have been done without recompiling the entire server with a specific flag? This could have sped up patching

Foreshadow: Extracting the Keys to the Intel SGX Kingdom


Link: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-van_bulck.pdf

Summary:

This paper focuses on a new method of exploiting SGX, the secure-enclave hardware Intel has shipped in its processors since the Skylake generation. The attack takes advantage of speculative (out-of-order) execution, a performance feature in which instructions are executed before it is known whether they should be. This can give transient, indirect access to memory that the attacker should never be able to read. Once this happens, there are a number of ways the leaked state can be used to extract data from the enclave, going so far as to recover cryptographic key material. More important, though, is the fact that these vulnerabilities are built into the microarchitecture of the chips, so they are extremely hard to patch. This paper also has me wondering how the researchers came across this topic, especially with two similar papers being published, within weeks of each other, during the embargo period.
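
As a rough mental model of that "transient, indirect access", here is a toy Python simulation: a load that will architecturally fault still runs in the transient window, and the attacker uses the loaded value to touch one slot of a probe array, leaving a footprint in a (here, simulated) cache that survives the fault. All addresses, values, and the fake-cache model are invented for illustration; this is not an actual exploit.

    # Toy simulation of the transient-execution leak pattern (not a real exploit).
    CACHE_LINE = 64
    cache = set()                        # fake cache: records which probe lines were touched

    enclave_memory = {0x1000: 0x2A}      # a protected byte the attacker must not read directly

    def transient_window(addr: int) -> None:
        """Architecturally this access faults and its result is discarded, but in the
        transient window the loaded value briefly feeds a dependent access, which
        leaves a lasting microarchitectural trace (a cached probe line)."""
        secret = enclave_memory[addr]        # would fault / return garbage architecturally
        cache.add(secret * CACHE_LINE)       # models touching probe_array[secret * CACHE_LINE]

    def recover_byte() -> int:
        """Probe all 256 candidate lines and see which one is 'fast' (cached)."""
        for value in range(256):
            if value * CACHE_LINE in cache:
                return value
        return -1

    transient_window(0x1000)
    print(hex(recover_byte()))   # 0x2a -- leaked despite never being architecturally readable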

What I liked:

  1. The paper is very topical given the recent Intel SGX vulnerabilities

  2. The attack is very general and doesn't rely on many special assumptions about the target system.

  3. The paper's attack doesn't require root access -- which undermines SGX's ability to defend against internal attacks, i.e., when the admin is malicious -- and on top of that the researchers were able to extract full cryptographic keys

  4. I liked that they gave a really good example of how this could be used in cloud attacks, i.e., "co-residing cloud tenants" as attack vectors, which is something I hadn't even imagined

  5. The paper was very thin on mitigation techniques for Intel, but in a way that just proves how strong their attack is

What I didn't like:

  1. The paper was released concurrently with the patches - I think that might have been a little bit too soon

  2. The paper didn't go into enough depth on breaking SGX sealing and attestation. It left me with questions about the security behind the sealing mechanism

  3. The paper kind of jumps in with the idea that everyone knows what speculative execution is - given this is a fairly new concern I'd like it if they explained it out more

  4. I wish they published more in depth details about how they were able to execute an attack like this - like it would be amazing in the real world if they posted code but given the magnitude of the vulnerability I understand why they may not have

  5. This paper could have used a lot more visuals, especially when explaining more about the very dry aspects of caches and how their approach compromises the safeguards that are currently in place

Points of Discussion:

  1. How has the response from large cloud service providers who use SGX (i.e., Microsoft and IBM) differed from that of smaller startups?

  2. How do blockchains rely on this secure hardware specifically?

  3. Are debug enclaves present in production systems on every computer, or are they specific to the ones Intel used during testing?

  4. How did researchers stumble on to this type of vulnerability?

  5. Has the proprietary nature of Intel's chips helped or hurt the security of its systems.

New Ideas:

  1. Explore how attestation becomes arbitrary (dependent on one party's say-so) if Intel has a centralized service for it

  2. Map out the systems currently in use that haven't been patched for these bugs

  3. Would slowing down the CPU speeds prevent speculative execution? To what degree and how would this compare to current mitigation techniques

  4. How might Intel need to change the micro-architecture of future chips in order to prevent against similar attacks in the future

  5. Is there an approach that can fix these microarchitectural vulnerabilities with over-the-air updates?

Meltdown: Reading Kernel Memory from User Space

Link: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-lipp.pdf

Summary:

Modern computer systems depend on kernel memory being inaccessible from user space; this paper, written by a multitude of authors and teams, turns that assumption on its head by showing that their attack, Meltdown, can exploit side effects of out-of-order execution to read private data. The prevalence of out-of-order execution in modern processors makes this paper even more relevant, given that the vulnerability existed in a huge share of the world's computers. Thankfully, the paper also explores mitigation techniques that were developed for other reasons, such as the KAISER defense, which inadvertently defends against these types of exploits to some degree.
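
Since the summary leans on KAISER, here is a toy Python sketch of why it helps: Meltdown needs kernel data to at least be mapped (even if marked supervisor-only) in the user process's address space so that a doomed load has something to forward transiently; KAISER/KPTI simply stops mapping most kernel memory while user code runs. The page-table model below is a drastic simplification of my own, for illustration only.

    # Drastically simplified model of why KAISER/KPTI blunts Meltdown (illustrative only).

    # Classic layout: kernel pages are mapped into every process, just marked supervisor-only.
    classic_user_view = {0xffff0000: {"value": 0x42, "supervisor_only": True}}

    # KAISER/KPTI layout: while user code runs, kernel pages are simply not mapped at all.
    kpti_user_view = {}

    def transient_read(page_table: dict, addr: int):
        """Model of the transient window: the supervisor-only permission check is
        enforced too late to stop a dependent access, but an *unmapped* address has
        no value to forward in the first place."""
        entry = page_table.get(addr)
        if entry is None:
            return None            # nothing mapped -> nothing for the attacker to encode
        return entry["value"]      # transiently available despite supervisor_only

    leaked = transient_read(classic_user_view, 0xffff0000)
    print(leaked)                                          # 66 (0x42) -- leaks transiently
    print(transient_read(kpti_user_view, 0xffff0000))      # None -- KAISER removed the mapping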

What I liked:

  1. The paper details Meltdown, which is a hardware vulnerability rather than a flaw in any particular piece of software

  2. The paper looks into mitigation techniques such as KAISER, a defense mechanism originally built to protect KASLR

  3. The paper presents a really interesting end to end attack which looks at the different facets of how an attack would  really happen out in the wild.

  4. The paper goes into not only a raw attack but also talks about ways they can optimize the attack

  5. The explanation of why Kaiser defends against aspects of Meltdown was a very interesting addition

 

What I didn't like:

  1. This attack is very specific to processors that use out-of-order execution -- which, as we saw in the last paper, is becoming a common source of vulnerabilities

  2. This attack doesn't work on all windows machines, only a subset of them

  3. The mitigation techniques aren't novel, i.e., we had already deployed them for a different reason

  4. The paper doesn't explain at all why this attack doesn't work against iOS -- does it have to do with the fact that Apple designs its chips differently?

  5. When it comes to asking questions, there are a lot of people and teams who worked on this paper -- so it might be a challenge finding the right person to get in contact with

 

Points for Discussion:

  1. How did the discovery of the KASLR-bypass attacks (which motivated KAISER) differ from the discovery of Meltdown?

  2. At an architectural level, why is out-of-order execution prevalent in modern cores as opposed to older cores?

  3. What have virtual environments done in the aftermath of Meltdown disclosures in order to secure their services

  4. Why is there a difference in the vulnerability when it is run on Linux as opposed to Windows

  5. Have there been any documented Meltdown attacks on Android?

 

New Ideas:

  1. Compare the architecture of Apple chips to Intel's, specifically in the context of KASLR bypasses and Meltdown -- why didn't Apple fall into the same pitfalls as Intel?

  2. Is there an alternative way in order to segment enclave memory to prevent these attacks

  3. Map other CPUs that share similar designs and see if they are vulnerable to similar attacks

  4. Study the prevalence of Meltdown attacks in android and compare to systems that had timely patches

  5. Is there a way to attack the supervisor bit on the processor to get access to restricted areas?

EnclaveDB: A Secure Database using SGX

Summary: EnclaveDB is a database engine that guarantees confidentiality, integrity, and freshness for data and queries even when every other actor, including the database administrator, is malicious. The novel way it achieves this is through a small trusted computing base that keeps sensitive data in enclave memory and restricts queries to precompiled stored procedures. I think the real beauty of this work is that they were able to do it with minimal synchronization between threads -- which, honestly, I would find a challenge to do.
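
The freshness guarantee is worth making concrete. One standard way to get it -- shown below as a toy Python sketch under my own assumptions, not EnclaveDB's actual protocol -- is for the trusted side to authenticate each response together with a monotonic counter, so an untrusted host can't replay an older, stale answer.

    # Toy sketch of a freshness check using a MAC plus a monotonic counter.
    # Illustrates the general idea only; EnclaveDB's real protocol differs.
    import hmac, hashlib

    SHARED_KEY = b"key established during remote attestation (hypothetical)"

    class TrustedEngine:
        """Stands in for the query engine running inside the enclave."""
        def __init__(self):
            self.counter = 0                      # state the untrusted host can't forge

        def answer(self, query: str):
            self.counter += 1
            result = f"result-of({query})"        # placeholder for a precompiled procedure
            msg = f"{result}|{self.counter}".encode()
            tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
            return result, self.counter, tag

    class Client:
        """Verifies both integrity (MAC) and freshness (counter must advance)."""
        def __init__(self):
            self.last_counter = 0

        def check(self, result: str, counter: int, tag: bytes) -> bool:
            msg = f"{result}|{counter}".encode()
            expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
            if hmac.compare_digest(tag, expected) and counter > self.last_counter:
                self.last_counter = counter
                return True
            return False

    engine, client = TrustedEngine(), Client()
    reply = engine.answer("SELECT balance FROM accounts")
    print(client.check(*reply))   # True
    print(client.check(*reply))   # False -- replaying the same (stale) reply is rejected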

 

What I liked:

  1. The attacker model for this paper is amazing, the study authors managed a way to maintain the integrity of the database even when the database admin is malicious

  2. The paper explains really well why current methods for property preserving encryption end up failing or not holding up as well as this solution

  3. The system maintains the programming model similar to conventional relational databases so they are not reinventing the wheel.

  4. I found it interesting the system allows for remote attestations - which seems very similar to the literature I've read on distributed ledgers and block chain systems

  5. The protocol requires minimal synchronization between threads - which explains why the system is able to maintain freshness of data

 

What I didn't like:

  1. Even though the security gains offered by this system are huge, I think 40% more overhead might be too much for it to supplant traditional database systems.

  2. The system requires specific hardware (SGX), which means operators need to replace their current hardware in order to use it

  3. I'm not sure how practical it is to assume we can host huge amounts of data in DRAM just because its price is falling

  4. The system uses Intel's attestation verification service to check whether a given quote has been signed by a valid attestation key. From what I understand, if this service fails it could end up compromising the entire system, so it is potentially a single point of failure

  5. I'm not really 100% sure how the remote challenger can establish trust in an enclave without a lot of really intensive ZKPs

 

 

 

Points for Discussion:

  1. How different are these secure enclaves from the ones used by Apple?

  2. How could someone spoof an enclave in an untrusted database

  3. Instead of creating a trusted area inside an untrusted database, why not work on finding a secure database to build on in the first place?

  4. How is the Merkle tree used in EnclaveDB different from the ones used in blockchains?

  5. What percentage overhead for a secure database system like this is acceptable for wide scale adoption in the enterprise community?

 

New Ideas:

  1. Explore new ways to audit results from this database

  2. Building a distributed attestation service might be really interesting, because each participant could verify in its own way whether the enclave really is an enclave. It could also distribute the computation of a ZKP

  3. From a hardware POV is there a way to designate secure memory only for a specific function?

  4. How can distributed methods of memory tables be used in systems like Block Chains

  5. Analyze the adoption of systems like these and their user groups

 

 

Iron: Functional Encryption using Intel SGX

Summary: This paper covers Iron, a really powerful functional-encryption system. The reason this paper is groundbreaking is that it offers a faster, more practical version of functional encryption that is dramatically faster than current methods. The problem that makes the paper hard to analyze is that it is built on SGX, which is proprietary, so security researchers do not have the same ability to scrutinize it that they would with, say, open-source software. The reason this technology is cool is that it could pave the way for more sharing of data while at the same time protecting that data from misuse. If someone wants to keep banking information in the cloud or with some entity, they can issue keys that reveal only the results they agree to share with another entity, improving overall security.
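
To make the banking example concrete, here is a toy Python sketch of the functional-encryption interface: the owner encrypts once, and a key tied to a specific function f lets its holder learn only f(data), never the data itself. This is an insecure, in-process simulation of the API shape only (the class names and the average-balance function are my own illustration), not Iron's actual SGX-based construction.

    # Toy simulation of the functional-encryption interface: decrypt(sk_f, ct) = f(x).
    # Insecure illustration of the API shape only; Iron realizes this with SGX enclaves.
    from typing import Any, Callable

    class Authority:
        """Plays the role of the trusted party (in Iron, a key-manager enclave)."""
        def __init__(self):
            self._store = {}

        def encrypt(self, data: Any) -> int:
            handle = len(self._store)          # stand-in for a real ciphertext
            self._store[handle] = data
            return handle

        def keygen(self, f: Callable[[Any], Any]) -> Callable[[int], Any]:
            """Issue a 'function key': it can evaluate f on a ciphertext and nothing else."""
            def sk_f(ciphertext: int) -> Any:
                return f(self._store[ciphertext])
            return sk_f

    authority = Authority()
    ct = authority.encrypt({"alice": 120, "bob": 80, "carol": 400})   # bank balances

    avg_key = authority.keygen(lambda accounts: sum(accounts.values()) / len(accounts))
    print(avg_key(ct))   # 200.0 -- only the agreed-upon function of the data is revealed;
                         # the key holder never sees the individual balances.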

 

What I liked:

  1. Functional Encryption is a technology that has a lot of potential applications if the technical and feasibility aspects of it can be worked out

  2. This method runs functional encryption at full processor speeds which has been a challenge to accomplish in past studies

  3. The method works well on complex functions and might even be better the more complex the function is?

  4. The system doesn't put all its hope in the trust offered by SGX and treat it as a black box -- the study approaches it from the point of view that SGX has limitations

  5. The study has a very good explanation of the relevant  SGX knowledge needed to understand this paper.

 

What I didn't like:

  1. There might be a single point of failure in the attestation system which secures enclaves

  2. There are ways to spoof the request for keys that the paper doesn't cover

  3. I'm not sure how realistic it is for the enclave to erase everything relevant to its state from memory

  4. This system requires 3 different secure enclaves - is it possible to do it with 2 or less?

  5. SGX isn't open source so it might be hard to evaluate it

Points for Discussion:

  1. Is there room from a hardware POV for future improvements to functional encryption

  2. Say the encryption underlying functional encryption is broken 20 years down the line -- are there any precautions that should be taken now when encoding the core data?

  3. Has there ever been a documented failure of Intel's attestation verification service?

  4. How comprehensive can a security study of SGX-based products be, given that the core tech is proprietary?

  5. Are there any other comparable technologies that have different implementations which can be used as the basis for future functional encryption.

New Ideas :

  1. Explore possible audit methods for making sure the technology gives you the right answer using rudimentary systems, maybe CSD?

  2. Explore side channel attacks which might not have been covered in the paper

  3. Develop Iron for an open source environment such as Sanctum

  4. Encrypt data even more before putting it into functional encryption

  5. What regulations need to be in place for the wide scale adoption of functional encryption

Arbitrum: Scalable, private smart contracts


Summary: I think certain blockchain ideas get a little bit too much hype, and this is definitely one of them. Arbitrum claims to be a step up from Ethereum by offering smart contracts that can scale. But I think there are some very fundamental problems with the way this system is designed that leave it open to denial-of-service attacks. Furthermore, I think the system as a whole places too much power in the hands of third parties, while systems like Ethereum depend only on the code. I think this is my least favorite paper of the year so far.
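
For context on why disputes are supposed to stay cheap on-chain, here is a toy Python sketch of the bisection idea: an asserter claims the VM went from one state to another in N steps, the challenger repeatedly bisects the claim, and the chain only ever has to re-execute a single step. The one-instruction "VM" (a counter that increments) and all names here are my own simplification, not Arbitrum's actual protocol.

    # Toy sketch of bisection-style dispute resolution (my simplification, not Arbitrum's protocol).

    def vm_step(state: int) -> int:
        """Stand-in VM: one step just increments the state."""
        return state + 1

    def asserter_run(state: int, steps: int, cheat_at: int = -1) -> int:
        """What a (possibly dishonest) manager claims happened off-chain;
        cheat_at marks the step where they quietly add 100 to the state."""
        for i in range(steps):
            state = vm_step(state) + (100 if i == cheat_at else 0)
        return state

    def dispute(start: int, claimed_end: int, steps: int, cheat_at: int = -1) -> bool:
        """Bisect the disputed execution until one step remains, then re-execute just
        that step 'on-chain'. Returns True iff the assertion is upheld."""
        lo, lo_state, hi, claimed = 0, start, steps, claimed_end
        while hi - lo > 1:
            mid = (lo + hi) // 2
            mid_claim = asserter_run(lo_state, mid - lo, cheat_at - lo)  # asserter's midpoint claim
            if mid_claim == asserter_run(lo_state, mid - lo):            # challenger re-runs honestly
                lo, lo_state = mid, mid_claim      # first half agreed; dispute the second half
            else:
                hi, claimed = mid, mid_claim       # disagreement already; dispute the first half
        return vm_step(lo_state) == claimed        # single-step check the chain can afford

    print(dispute(0, asserter_run(0, 16), 16))                   # honest claim -> True
    print(dispute(0, asserter_run(0, 16, cheat_at=5), 16, 5))    # dishonest claim -> False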

 What I liked:

  1. The system works through an off-chain approach to figuring out how to conduct transactions -- very good given the latency issues that plague normal blockchains; there's a general push in the blockchain space to move computation off-chain

  2. The system, for what it's worth, is somewhat cheaper to manage because not everyone has to redo the same computations

  3. The paper claims to solve scalability issues for EVM smart contracts; it's debatable whether it actually does this, but the fact that they are focusing on the issue is pretty admirable.

  4. I think there's a lot of customization in the VM options which is something you don't get when working with the standard EVM

  5. It emulates what I think would be a reasonable human system in the 20th century -- it's something I could explain to my grandma decently well.

 

What I didn't like:

  1. Ethereum and EOS are probably the industry standards -- deviating from what everyone else is doing doesn't really help improve security. I'd prefer if they had built a system on top of an existing platform

  2. I think there's too much power given to the verifiers in this system - it’s a little bit too centralized to scale well in practicality

  3. Checking proofs every time there is a dispute could be a little bit challenging -- for one, who determines what's correct? Think of the split between Ethereum and Ethereum Classic, where both sides had a defensible answer

  4. I think it is very likely you could overwhelm a manager and make them miss their response window -- the fact that this type of system needs human intervention makes me very skeptical of it

  5. I don't like the idea of negative incentives, i.e., penalizing someone for challenging something. At worst, I think you should end up no worse off than you started.

 

Discussion Points:

*Just as a general note -- this paper left a lot of questions in my mind which I guess will end up being solved through usage if this ever gets deployed in the real world

  1. Under what circumstances would a malicious actor be able to take control of the smart contract

  2. Is it possible to add new managers to the contract while the contract is already alive, ie for a company appointing a new board member

  3. Does a manager need to be online 24/7 in order to make sure their VM is working correctly and as intended?

  4. How do you make a legal system that has no consensus method baked in? That just puzzles me a bit, because they claim it's platform-agnostic, but I think there should be some form of default

  5. Are there any ambiguous cases where a verifier could split both ways?

 

New Ideas:

  1. Make a version where you take out the verifier and just have the VM decide what to do (this ends up being very similar to what we have in Ethereum) -- the reason being that people are ambiguous and code is not

  2. Model an attack that just creates an endless stream of disputable assertions -- I don't think this system would end up holding up against it

  3. Create a way to predict before the program is run how much the VM will spend trying to run the program

  4. How much more efficient would this system be if the customization of VM's were stripped?

  5. Study adoption methods within the wider smart contract community

Experimental Security Analysis of a Modern Automobile

Link to the Paper: http://www.autosec.org/pubs/cars-oakland2010.pdf

Summary:

This paper explains how cars, which were once entirely mechanical devices, have transitioned in the 21st century into digital, computerized systems. The reality of this transformation is the sheer number of subsystems that now contain computers. When the first computers were introduced into cars, it was in response to regulation such as the Clean Air Act, which required pollution control. But since then, most cars have come to contain on the order of 50-70 Electronic Control Units (ECUs) running a huge amount of code. This paper looks at how to exploit two 2009 vehicles in three different settings.

 

What I didn't like:

  1. The study only explored two cars, both from 2009, which raises the concern that the findings could be outdated

  2. The study noted that modern EVs tend to have even more ECUs given their specific hardware requirements, but never really fleshed this out. I think this was a miss given that EVs are supposed to be the future of automobiles.

  3. The study says it's not clear whether auto designers built their systems with an adversary in mind -- I think this could have been answered with a very simple survey included in the study

  4. The study is really general at points, e.g., "Looking forward, we discuss the complex challenges in addressing vulnerabilities in the context of the automotive ecosystem" (loosely quoted)

  5. At the same time, the study is very specific to the vehicles tested and doesn't go after industry-wide trends when noting cybersecurity challenges -- this means it's very hard to draw lessons for the general industry

 

 

What I liked:

  1. The study paints a clear story of why cybersecurity hasn't kept up. We went from no computers to 50-70 computers really fast.

  2. The study really explained a lot of the different ECUs and how they differ across a bunch of different cars

  3. The study explores a lot of different facets of car security that aren't just the obvious go-tos when you think of a car, i.e., not just OnStar

  4. The study makes sure to consider actors such as "car tuners" who might not be malicious actors but want more custom control of their car

  5. The study explored experiments on the car in 3 different settings ie (Bench, Stationary, and on the Road)

 

 

Points for Discussion:

  1. How easy is it to infiltrate ECUs as a regular user?

  2. Are modular cars a thing right now? How long until I see one on the road in mass production?

  3. Has regulation in the car industry which led to the introduction of ECU's been a positive or negative in the realm of cybersecurity?

  4. What actions in the short term can be taken to help secure cars against future attacks?

  5. How long until we see cars as a major vector for cyber attacks against people?

 

New Ideas:

  1. How hard would it be to patch ECU bugs with over-the-network updates?

  2. Given the vulnerabilities in cars is it best for some critical systems to stay analog on cars?

  3. Would consolidation of the vehicle software industry improve cybersecurity for automobiles?

  4. Can OnStar be used to leverage new cyber protections for cars - can we learn anything from how they keep their data safe

  5. How does the necessity for cars to be serviced by 3rd parties compromise the vehicle's security?

Comprehensive Experimental Analyses of Automotive Attack Surfaces

Link to the Paper: http://www.autosec.org/pubs/cars-usenixsec2011.pdf

Summary: This paper takes a different view of how cars could be compromised in the modern era. Instead of focusing on internal threats that arise from physical access to the car, it starts to pivot the academic discourse toward external attacks that could happen wirelessly. The fact that this study points out so many similarities between different models of attack suggests there is a lot of work still to be done in this field.

 

What I liked:

  1. The fact that the paper stayed away from internal, physical-access issues meant the authors were going for a harder study, which brought a different point of view

  2. The paper made sure to stick to very practical attacks and stayed away from the very abstract

  3. The study identifies common similarities between different vulnerabilities - which means this paper isn't really narrowly tailored on one product

  4. The paper puts realistic limits on the adversary -- honestly it makes me more concerned about physical attacks, but I think they portrayed an accurate attacker

  5. Paper gives a lot of different examples of how a 21st century vehicle could be attacked

 

What I didn't like:

  1. My takeaway from reading this paper is that the biggest threats to my car still come from people having physical access to it

  2. I've read a lot about how some vehicle software updates are hampered by the dealer relationship in the US -- I'd love to read more about that from a cybersecurity point of view, which really wasn't touched on in this paper

  3. I don't really consider the short range wireless attacks different from having physical access as they are both proximity based - and that would deter most attackers

  4. The paper focuses on attacks that could gain arbitrary automotive control which I think is kind of a low bar to set for an attack paper

  5. The study doesn't really dive into how integrated auto supply chains are, i.e., one supplier could be providing the chips for multiple companies that make cars

 

 

Points for Discussion:

  1. The paper mentions the tradeoff in distributed computer systems between efficiency, safety, and cybersecurity - in the context of autos how should this tradeoff be weighed?

  2. Is there a hierarchy of what to defend first in an automobile, e.g., audio systems vs. drivetrain? Does the security approach differ?

  3. By when will we see a rise in these types of auto attacks?

  4. How available are auto manuals and parts online? Is it easy to reverse engineer with all the documentation out there?

  5. How does the development cycle for auto security differ as compared to other tech industries?

 

 New Ideas:

  1. How can we make cars less susceptible to long-range attacks?

  2. Is there a way to randomize the addresses of every vehicle to prevent targeted attacks of vehicles? Would this deter people from carrying out attacks?

  3. How much time does the average car patch take to deploy compared to, say, mobile devices? How does this affect the safety of vehicles?

  4. Can we add secure hardware to augment vehicle security through the federally mandated OBD-II ports?

  5. What steps can the auto industry take as a collective to find future vulnerabilities in their autos?

Rethinking Access Control and Authentication for the Home Internet of Things (IoT)

Link to the Paper: https://www.usenix.org/system/files/conference/usenixsecurity18/sec18-he.pdf

Summary:

This paper tries to ascertain how an IoT-connected home of the future might look -- especially in a world that no longer grants privileges per device, as in the past, but instead per capability (e.g., unless you are at least a teenager, you may not unlock the front door). While the paper does a good job of gathering a lot of opinions on how people might feel about different roles getting different capabilities, it seems the study had some issues with generalizability.
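
Since the core proposal is moving from per-device to per-capability access control, here is a toy Python sketch of what such a policy check could look like. The roles, capabilities, and default decisions below are illustrative placeholders in the spirit of the paper's framing, not the authors' actual recommended policy.

    # Toy capability-based access-control check for a smart home (illustrative defaults only).

    # Policy is keyed by (role, capability) rather than by device, so a person gets
    # consistent permissions across every device that exposes the same capability.
    DEFAULT_POLICY = {
        ("adult",    "unlock_front_door"):     "allow",
        ("teenager", "unlock_front_door"):     "allow",
        ("child",    "unlock_front_door"):     "deny",
        ("visitor",  "unlock_front_door"):     "deny",
        ("child",    "lights_on_off"):         "allow",   # low-risk capability open to most roles
        ("visitor",  "lights_on_off"):         "allow",
        ("child",    "delete_camera_footage"): "deny",
        ("adult",    "delete_camera_footage"): "ask_owner",  # sensitive: escalate rather than default
    }

    def authorize(role: str, capability: str) -> str:
        """Return 'allow', 'deny', or 'ask_owner'; unknown combinations fail closed."""
        return DEFAULT_POLICY.get((role, capability), "deny")

    print(authorize("teenager", "unlock_front_door"))   # allow
    print(authorize("child", "unlock_front_door"))      # deny
    print(authorize("adult", "delete_camera_footage"))  # ask_owner
    print(authorize("visitor", "set_thermostat"))       # deny (fail closed)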

 

What I Liked:

  1. The fact that the study used 425 participants to carry out this survey

  2. The study establishes very early how home IoT devices are inherently different from the devices other security models are tailored to, because IoT devices are so often shared

  3. The paper suggests a new paradigm for device security tailored to IoT devices, specifically one that is capability-based

  4. The paper accounts for insider attacks -- which are often overlooked and will play a major role going forward, e.g., for domestic abuse survivors, as IoT devices are used in the home

  5. The study very clearly focuses on the general population as a whole as opposed to early adopters to prevent biasing the study

 

What I Didn't Like:

  1. The study was done online, which makes it very hard to verify that the participants were real and actually put some thought into their responses

  2. I feel like the study's focus on creating some type of default settings is misguided because a lot of these permissions really depend on the individual family

  3. The study referenced that they evaluated future capabilities that were "likely" to be deployed which really doesn't seem sound

  4. The use of free-text responses, which are very hard to evaluate and present in a concise manner

  5. The study does conclude that they do have issues with the ecological validity and generalizability of their data given that their surveys were done online.

 

 

 

 

New Ideas:

  1. Can we have one device or hub act as the quarterback for authentication for an entire IOT ecosystem?

  2. Why don't new security features focus on having the hub authenticate the orders coming from the human? Would this create a single point of failure?

  3. Tie feature controls to a specific cellphone and then use that phone as the identifier for a specific user in an environment with multiple users

  4. For children who are still developing, will having less ability to control the environment around them (e.g., lights or temperature) affect how they grow up?

  5. Repeat the study in person with real people instead of an online survey

 

 

Discussion:

  1. What obstacles does voice based security need to overcome in order to be implemented in the future?

  2. How do database systems limit capability on a per user basis and enforce those permissions - can those be shared with IOT systems?

  3. Does creating security defaults for permissions open IOT companies to any liabilities in the future?

  4. Should we grant general access to anyone for certain cases where people are in close physical proximity to the device ie lights?

  5. What percentage of households are actually moving toward an IOT type house?