I was doing some exploration online and found a couple of neat things:
- Spheres of Influence – Drawing lifelines of the city
- The Reality Editor – Networking functional relationships between physical objects
- Clickclickclick.click – Revealing browser events that monitor our online behaviour
- Order+Noise (Interface I) – Negotiating the boundary of randomness
- Virtual Depictions: San Francisco – Cinematic data-driven sculpture
- Possible, Plausible, Potential – Drawings of architecture generated by code
This week I looked for papers to help with my research process, specifically exploring dl.acm.org.
One paper that seemed quite compelling and relevant was “Duer: Intelligent Personal Assistant”; sadly, only the abstract is available, not the full text. As Duer is Baidu’s personal assistant, the paper talks about how the assistant can stay personal at the user level while operating at Baidu’s scale. Dr. Wang describes three elements of the assistant that help it be “Baidu’s intelligent personal assistant”. First, understanding user requirements through explicit utterances, user models, and rich context; I would be interested to see what defines a rich context given the available information. Second, drawing on user information and usage across all of Baidu’s products. And third, keeping the interactions as natural as possible by allowing many types of input, including text, speech, and images.
Another paper looks at the optimization process of mechanical engineers. Their biggest challenge seems to be a diverse user base: users with no knowledge of how to produce an optimization, users with limited knowledge whom the tool helped perform a bit better, and power users who wanted more information to help them create an even better optimization. This user structure reminds me a lot of the data visualization work I did at Consumer Reports, where I was tasked with helping all types of users decide whether a certain car was for them. Some users, mostly those who did not know much about cars, wanted just a yes or no on the purchase, while other users wanted to know every detail about the car and make the decision themselves. For my work I think I’ll limit it to one kind of user, to avoid trying to boil the ocean.
|Date|Week|Task|
|---|---|---|
|Jan 24|2|Recap + Start Dev on Brainstorming Scenario|
|Jan 31|3|Complete Brainstorming Scenario|
|Feb 7|4|Port Jasper to Slack + Create Second Iteration of Brainstorming Bot|
|Feb 14|5|Deepen and extend the interactions of BrainstormBot (Experiential)|
|Feb 21|6|Deepen and extend the interactions of BrainstormBot (Dev)|
|Feb 28|7|Work on the Web side / display output of the images|
|Mar 14|Spring Break|Clean Up & Document|
|Mar 21|9|Writeup + reevaluate|
|Apr 4|13|Design Physical Space and the User Interaction for that Setting|
|Apr 4|14|Practice Presentations + Design Space Interactions|
|May 2|15|Final Presentations + Prep Physical showcase for Exhibition|
|May 5|Exhibition|Install + Document some of the Interactions|
|May 15|…|Final Documentation Due|
As part of my thesis prep last semester, I created a couple of scenarios and wrote a paper about the design process and how I plan to augment it.
Currently I’m working on developing a prototype to test with users. I would like to get a sense of what scenarios I can plug into beyond, or instead of, the ones I have come up with, whether in the brainstorming process or in taking work from low fidelity to high fidelity. I also want to keep thinking about how my tool will work in a team setting versus an individual one, and what assumptions come with that. I see the team aspect playing a critical part in my work.
To date I have developed one functioning example of a design tool (Jasper). I would like to port this to Slack to observe team interactions rather than individual ones, as the team aspect is far more compelling. By next week I also plan to have a scripted Alexa bot that will help surface images in a brainstorming scenario. As of now my only needs are in development: interacting with the Alexa APIs, and other APIs further down the road.
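As a rough sketch of what that scripted bot might look like, here is a minimal Alexa-style intent handler in Python. The intent name `SurfaceImageIntent` and the `Keyword` slot are placeholders I made up for illustration; a real skill would be registered in the Alexa developer console, hooked up to an actual image search, and deployed as a backend such as an AWS Lambda function.

```python
def handle_request(event):
    """Route an Alexa-style request JSON to a handler and build a response.

    `event` follows the Alexa request envelope shape: a top-level "request"
    object with a "type", and for IntentRequests, an "intent" with "slots".
    """
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {})
        if intent.get("name") == "SurfaceImageIntent":
            # Pull the brainstorming keyword the user spoke (a slot value),
            # falling back to a default when no slot was captured.
            slots = intent.get("slots", {})
            keyword = slots.get("Keyword", {}).get("value", "inspiration")
            # A real bot would query an image API here and attach results.
            speech = "Here are some images related to {}.".format(keyword)
        else:
            speech = "Sorry, I didn't catch that."
    else:
        # LaunchRequest (or anything else): greet the user.
        speech = "Welcome to BrainstormBot. Ask me to surface some images."
    # Alexa expects a versioned JSON envelope containing the output speech.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
```

The appeal of this shape is that the scripted logic lives in one plain function, so the same handler could later be pointed at Slack events instead of Alexa requests with only the envelope parsing swapped out.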
I want to avoid building something like the following 😜
Maybe I’ll try this out, since my thesis is about conversational design and voice-based conversation is a thing. Oh, also: welcome to my thesis blog!