Laura G.'s DGMD Journal

May 11: Milestone Questions

What did you accomplish during today's session? I didn't get to work in class because our chat went over time.

What do you plan to do before next week's session?
- Figure out styling to make it easy to tell what's selected and what's not in the nested lists. (very close)
- Add a function to close everything when a book is closed.
- Add a function to close the unselected chapters and pages when one is opened.
- Make the max height of list items 3 lines and cut them off with a fade or an ellipsis. (sort of did this?)
- Add a fixed menu bar that appears once you go into the actual text. This is where search and the controls for changing text color and size will live. (currently focusing on this)

What can we do to support you? I put in a code snippet request for this, but I need help with truncating the titles at 3 lines and putting an ellipsis after them, or fading them out.
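For the 3-line cutoff, pure CSS is probably the simplest route: `display: -webkit-box; -webkit-box-orient: vertical; -webkit-line-clamp: 3; overflow: hidden` clips at exactly three rendered lines. As a sketch of a JavaScript fallback, here is a character-based version; the 20- and 40-character limits below are assumptions standing in for "roughly three lines," and `truncateTitle` is a made-up helper name:

```javascript
// Clamp a long title to a character budget and append an ellipsis.
// maxChars is an assumption for "about three lines"; tune per font size.
function truncateTitle(title, maxChars) {
  if (title.length <= maxChars) {
    return title;
  }
  // Cut one character short to leave room for the ellipsis.
  return title.slice(0, maxChars - 1).trimEnd() + "…";
}
```

For example, `truncateTitle("Teaching English to Young Learners", 20)` gives `"Teaching English to…"`, while titles under the budget pass through untouched.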

May 7: My project online

1. First prototype made in Axure that shows the experience of going into the first book and then the first chapter to see the text: LINK

2. List that automatically fills in with the nested objects. It uses toggling and has minimal styling: LINK

3. A mockup to see how to make elements show up and others to hide while clicking. I changed this even more so that the red "book" div disappeared when you clicked on its blue "chapter" div, but I accidentally saved over it: LINK

4. Applying the programming used in the example above to my list and adding more styling: LINK

5. Second version of the Axure prototype with a new UI that has easier navigation. It ended up going back to the accordion function that it started off with.

To use it select "TEYL Certificate Textbook" > Chapter 2 > The first text link: LINK

6. Latest coded version of the list, which is not final. I want to add more styling and, more importantly, features that make reading the text easier, like making it larger and adding a color to every other line: LINK

MY PRESENTATION

May 4: Slides for Informal Presentation

After getting some much appreciated help, I'm getting closer to achieving an interaction that is logical with such nested content. The latest version is here.

I uploaded everything that I needed for my informal group presentation using Slides, which you can see here.

April 21: Share-out for final presentation

What materials do I still have to create, collect or style? I need to finalize my prototype and get my actual code to work how I want it to.

What do you hope your final demo will look like or do? I want to be able to hook up Google Drive's API to the code I have that organizes everything into nested elements. I want those elements to have the same user experience as my prototype, which has the user drill from book titles > chapters > text.

What materials do you still need to post to Github? I still need to post my actual code with the JS and styles.

April 20: Milestone questions

What I accomplished during last week's session. I was out of the country last week so I met up with Alec and we went through making a mockup that shows only the selected element and its parent while making the rest of the elements go away.

What I plan to do before next week's session. I got back a day before the class so unfortunately I couldn't do much. Right now I'm trying to take the mockup that we built last week and change it because right now everything is showing at once and then you drill down. In order for it to behave like my prototype it would have to show all the titles first, then when that is clicked it would show the selected title and its chapters, then just the selected chapter and text.

What can you do to support me? I'll probably need some guidance in tweaking the mockup. Right now I can make it show only the red "titles" divs and then when one is clicked the other goes away, showing all the blue "chapters" divs instead of only the ones inside of the "title" div I clicked on.
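The show/hide rule I'm after could be worked out as a pure function first and wired to clicks later. This is only a sketch with made-up field names (`id`, `parentId`): an item is visible if it is the selected one or a direct child of it, and only the top-level "title" divs show when nothing is selected.

```javascript
// Sketch of the drill-down visibility rule. Every item has an id and a
// parentId (null for top-level book titles); both names are assumptions.
function visibleIds(items, selectedId) {
  return items
    .filter(function (item) {
      if (selectedId === null) {
        // Nothing selected yet: show only the book titles.
        return item.parentId === null;
      }
      // Show the selected item itself plus its direct children.
      return item.id === selectedId || item.parentId === selectedId;
    })
    .map(function (item) {
      return item.id;
    });
}
```

With two title divs `b1`/`b2` and chapters `c1`/`c2` inside `b1`, `visibleIds(items, "b1")` returns `["b1", "c1", "c2"]`, which is exactly the "selected title and its chapters" behavior, instead of all chapters showing at once.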

April 6: Milestone questions

What I accomplished during last week's session. I got all of the fake books, chapters and text to list without me having to name them.

What I plan to do before next week's session. I plan to be able to organize the chapters and text into their respective books. After that I want to be able to link everything together so that everything doesn't show up at once.

What can you do to support me? I'll try to get some one-on-one time with one of you to ask for tips on how to do the above. When I get that set up I can move on to styling and adding features like making the text larger, which is important.
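The organizing step above, grouping chapters and text under their respective books, can be sketched recursively. The field names here (`id`, `name`, `parentId`) are simplified assumptions standing in for whatever the listing actually returns:

```javascript
// Group a flat listing into a nested books > chapters > pages tree.
// Field names (id, name, parentId) are simplified assumptions.
function buildTree(entries, parentId) {
  return entries
    .filter(function (e) { return e.parentId === parentId; })
    .map(function (e) {
      return { name: e.name, children: buildTree(entries, e.id) };
    });
}
```

Calling `buildTree(entries, null)` on the flat list returns the top-level books, each carrying its chapters (and their pages) in `children`, so nothing has to be named by hand.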

March 30: Milestone questions

What I accomplished during last week's session. A better sense of JavaScript, plus building a dictionary of my folder IDs and being able to match them to the folder names.

What I plan to do before next week's session. Get my second level of child folders to show up! Refine my prototype by adding features to change text colors and make text larger. Also do a deeper dive into designing for the vision impaired.

What can you do to support me? Honestly, I think I need a few sessions of sitting down for a couple of hours with one of you guys and working through getting Google Drive to behave and do what I want. I'm trying to figure things out myself but I keep hitting roadblocks since my JavaScript is still very rudimentary and I jumped straight into working with an API, like a dummy.
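The ID dictionary mentioned above is essentially one small loop. A minimal sketch, assuming each folder object exposes `id` and `name` fields (an assumption about the response shape, and `buildFolderIndex` is a made-up name):

```javascript
// Build a plain object mapping each folder's ID to its name, so an ID
// coming back from the API can be matched to a readable folder name.
function buildFolderIndex(folders) {
  var index = {};
  folders.forEach(function (folder) {
    index[folder.id] = folder.name;
  });
  return index;
}
```

For example, `buildFolderIndex([{ id: "1a2b", name: "Chapter 1" }])["1a2b"]` gives `"Chapter 1"`.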

March 29: Further Research and update

Sites visited for research on making the web friendly for visually impaired people:

Lists and navigation for people with low vision are tough because if they contain a lot of content the larger text means endless scrolling. I tried researching to see if anyone had suggestions and asked my cousin if she had come across a site with a good experience for her and we both drew a blank.

I made several changes to my prototype according to my cousin's comments:

I'm still stuck on getting the second level of folders to show from Google Drive, but last session Shaunalynn helped make a big breakthrough so hopefully we're close.

March 23: Prototype

I made a very basic prototype to have my cousin use to make sure that the UI was intuitive enough to find what she's looking for: http://3vnp5g.axshare.com/landing.html
I gave her the task to try to read a specific chapter in a saved book and she was able to find it with no issue. I just have to make sure to increase the text size significantly so that the menu is comfortable for her to read.

March 22: Task List for Google Drive's API

I've gotten my cousin to test out Scanner Pro and to take note of her habits.
I know that she organizes by book>chapter>"pages" and does multiple pages at a time. She syncs automatically through the Scanner Pro app to Drive and organizes her folders in Scanner Pro.

With Alec and Shaunalynn's help, I've managed to successfully:

Immediate Next Steps:

March 9: Userflows

March 8: Digital Reader Next Step

After giving my presentation I received a lot of great feedback about the direction this project can go in. I thought about it some more and decided my current direction isn't the correct one to solve my cousin's needs.

A new direction that was suggested and that seems very promising is using OCR to digitize what the user wants to read and then taking that and enhancing it to the needs of someone who is low vision.
I spoke extensively to my cousin and she was very excited about using OCR technology. I had her test out a few apps to see if any of them stood out, and she liked a couple.

NEXT STEP: Find a cloud API that stores the OCR'd text, then find out how to take that data and use it.

One of the apps my cousin liked was Scanner Pro. It doesn't do OCR itself, but it lets a user upload their photo(s) to Google Drive, which can then run its own OCR to turn them into an editable, readable text file. Google Drive had the most accurate conversion of the four apps I tried, so it seemed like a good place to start.

With a lot of help from Alec I was able to set up the code so that when the API loads it exports a specific file to plain text, so that instead of just listing out file names into a div it has the text inside of the file requested.
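The export step described above could be wrapped roughly like this. In the Drive v3 JavaScript client the call is `gapi.client.drive.files.export({ fileId, mimeType })`; here the `files` resource is passed in so the wrapper can be exercised with a stub, and `loadFileText` is my own made-up name, not part of the Drive API:

```javascript
// Hypothetical wrapper: ask Drive to export one file as plain text and
// hand back the body, ready to drop into a div instead of a file name.
async function loadFileText(driveFiles, fileId) {
  var response = await driveFiles.export({
    fileId: fileId,
    mimeType: "text/plain",
  });
  return response.body; // the plain-text contents of the requested file
}
```

In the page, the resulting text would land somewhere like `document.getElementById("reader").textContent = text;` so the requested file's contents appear instead of a bare file listing.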



STEPS AFTER THAT?
  1. Map the user flow of my app.
  2. Sketch out app functionalities and basic layout.
  3. Find out how to get more than one file's text. How can I make the API list all the files in my folder, then let me open up whatever I choose?

February 24: Personal Project Sketches & Testing

What is the project's mission?
The project's mission is to turn a phone into a digital magnifier that is easier to use and less expensive than the portable magnifiers on the market today. Having the phone act as a magnifier also lets the user limit the number of gadgets they have to carry around.

Who is its audience, and what do they value?
My audience is primarily my cousin, who is low-vision. It can also benefit anyone with vision problems or anyone who is older.

How will you know if it is successful?
1. The camera enhancements work well enough to easily magnify text.
2. The extra hardware for it doesn't hinder the phone's portability or ease of movement when reading.
3. The accompanying app helps in adding features that improve visibility of words and reading in general.

What is your plan for going about it?
To get the amount of detail and zoom needed, I'll use a macro-lens to achieve better focus on an iPhone and a buffer or simple stand that can move around on a flat surface. An app will connect to the phone's camera and let the user easily read what the phone is placed on. The app will let you add light with the phone's flashlight, zoom in, and hopefully highlight sentences so that the reader can keep their place.


Testing


No Flashlight


1a: Macro lens, no zoom, 1.75 inches from surface.
1b: No lens, no zoom, 1.75 inches from surface.


With Flashlight


2a: Macro lens, no zoom, 1.75 inches from surface.
2b: No lens, no zoom, 1.75 inches from surface.


The closest enlargement I can physically get with macro and flashlight:



Conceptual sketch (click larger image):

February 16: Pomes changes

Unfortunately, while I was working on Pomes I didn't commit anything until I had made a bunch of changes, because I wasn't sure if they'd be going to my Git or not. Things I've done so far:
- Replaced all images with my images
- Changed locations on maps to match where my images were taken
- Changed all locations for the poems in the code to match the cities in my images
- Changed all the typefaces

In terms of my research for my personal project, I found a couple of APIs so far that could help me with getting product and menu information:
- Amazon Product Advertising API
- Menumix

February 15: More Personal Project Brainstorming

Digital Magnifying Reader for phone
After doing a bit of research (not very extensive), my conclusion so far is that iPhones don't have the hardware to be placed on top of an object and still focus. I'm tabling this idea unless I can find a feasible way to achieve it.
However, this gave me another spin-off idea...

Idea - Using QR codes on products to take people to low-vision friendly informational pages on phone
- If someone who is low-vision or blind wants to read a menu in a restaurant, all menus could come with a QR code that, when scanned, takes the user to a simple site/page on their phone with easy-to-read text and text-to-speech functionality, so they don't have to struggle with small text or making sense of terrible menu layouts.
- This can also be expanded to any on-the-shelf product. So many of them have incredibly small text that's actually pretty important for the buyer to read, whether it's descriptive or instructional. If they had a barcode/QR code that could be scanned to take the user to a well-designed, easy-to-read product page with appropriate features, I think it would really enhance their IRL shopping experience.

February 10: Brainstorming part deux

(UPDATE after conversation with cousin)
Idea 1 - Phone reader for low-vision people: My cousin uses a digital magnifying device that helps her read easily when she sets it down on a surface with type. This is very useful, but cumbersome because it's roughly the size of a tablet, so it's annoying to transport. They also usually cost upwards of $400.
- Is it possible to make an app that does the same function? Maybe one already exists and I haven't found it.
- It's basically a flashlight and magnifying glass in one.
- There's already an app out there that does the same thing, but either the iPhone's construction or its camera doesn't allow it to be set down on anything, so the user has to keep holding it as they read. Could I find a way to incorporate what this does and make it work similarly to a digital magnifying device?

Idea 2 - App for finding vegan and cruelty-free products (not food):
- My cousin wants to be able to easily see if a product (beauty and skin-care) is vegan and cruelty-free.
- It must have a scanner because her low vision causes her to take a longer time having to search by typing something out.
- Most of the apps I found are either only for searching vegan food products or just cruelty-free products.
- If she's in Target/Walgreen's/CVS/Sephora and needs to find something quickly, it would be cool if it would be able to tell her immediately by her location what's vegan in that particular store.

February 7-9: Brainstorming + Pomes

Personal Project brainstorming

My dad: Something involving the space station? News on launches? Sea things? He's a huge fan of marine history and ships.

Both my parents: A history of Cuba. Family history? - Do I have enough info/memorabilia for this? (update: Dad's going to dig up old photos and letters from his Soviet Russia days).

Cousin: She has Macular Degeneration. Maybe I can develop something specifically to help with this condition? A way for her to see important photos in an easier way? Need to do additional research to see if there's any visual methods that can be used to help eyesight.

Pomes personalization

I stuck to staying simple and just trying to make a travel diary with it, even though really I don't travel much. Originally I was trying to use the Instagram API to automatically populate images with the locations, but I got scared off by having to register with Instagram as a developer and give a reason in order to do it. For now I'm just e-mailing photos to myself and replacing them manually. I also had to get an Amazon S3 account in order to host my photos.

Questions: My S3 URL starts with "s3-us-west-2.amazonaws.com" - How can I change it so that it's only "S3.Amazon.com?"

February 4: people card mock-ups for Tyler M.

Tyler facts: Likes live shows, beer, and sports (basketball, soccer, baseball). Massachusetts born and raised, with a brief sabbatical in Vermont, so he's a New England native. Hipster music taste and glasses.

Link to Gist →

1. Concert setting

Tyler seems quiet and introverted at first, but he enjoys going to live shows and frequents them at least three times a month. These ideas place him in a loud concert setting: the first is a background of a concert with strobe lights traveling the page, and the second is his card jumping up and down to music from the speakers.

2. Pouring beer

Tyler loves beer a lot so the background of the entire page with his card can fill with poured beer until he's completely immersed in it.

3. Record Player

Tyler enjoys collecting LPs and has a fairly large selection, so maybe his picture can turn into a record being played.

4. Data loving + Meticulous

Tyler also spends a lot of time working with data, so that can be shown with numbers raining down or his picture becoming pixelated.

He is also a very organized guy who makes lists and pays attention to detail, which is where the idea of a giant magnifying glass came from.