Make an initial pitch for a final project that you would like to pursue for class. This pitch is meant to articulate the high-level aspects of the project. We will do a refined project pitch in a few weeks that will dive into the details of the project.
Final projects must:
Your project must be interactive.
This means your project cannot be a series of blinking lights. It must respond to user actions in some way.
Your project must be wearable.
This means it must be able to be affixed to the user while they are mobile.
Your project must be functional.
Course projects are meant to be prototypes or proofs of concept of larger ideas. Projects should be able to demonstrate their intended purpose.
In your pitch you should:
Describe what you want to do
Indicate whether you want to work in a team or as an individual
Explain which aspects of the project you feel confident about and which you feel less confident about
My idea for a new wearable technology would be a bracelet or similar accessory that could detect illness before it takes hold. You could simply breathe on the device so it could pick up dangerous germs. When it detected the onset of a cold, the flu, etc., it would alert the wearer and send a health recommendation or prescription (if needed) to the wearer’s email. Ideally it would be able to detect the beginning stages of more serious illnesses – cancer, diabetes, etc. – so that the wearer could take action immediately.
The detection of more serious illnesses would be a challenge; any illness whose detection requires drawn blood could become a problem. Perhaps it could act similarly to a diabetes blood-glucose monitoring system?
The idea for my project is to create shoes for the blind that have an array of proximity sensors that can detect obstacles in front of the wearer. The sensors would connect to vibrating pads in the wearer’s shoes. When obstacles appear, the pads would give the wearer a general sense of what lies in front of them. Challenges would include calibrating the proximity sensors and vibrating pads so that the device is actually useful, as well as making a product slim and durable enough that it could be worn daily.
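As a rough sketch of the sensing-to-feedback loop this pitch describes, here is a minimal Arduino-style program that maps one ultrasonic distance reading to the intensity of a single vibration motor. The sensor type (HC-SR04-style), pin numbers, and range values are assumptions for illustration, not part of the pitch.

```cpp
// Minimal sketch: map one ultrasonic distance reading to vibration intensity.
// Pin numbers and thresholds are placeholders and would need calibration.
const int TRIG_PIN = 9;        // ultrasonic trigger pin (assumed)
const int ECHO_PIN = 10;       // ultrasonic echo pin (assumed)
const int MOTOR_PIN = 5;       // vibration motor on a PWM-capable pin
const long MAX_RANGE_CM = 150; // ignore obstacles farther than this

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(MOTOR_PIN, OUTPUT);
}

long readDistanceCm() {
  // Standard HC-SR04-style ping: 10 us trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // ~30 ms timeout
  if (duration == 0) return MAX_RANGE_CM;         // nothing detected in range
  return duration / 58;                           // microseconds to centimeters
}

void loop() {
  long cm = readDistanceCm();
  // Closer obstacle -> stronger vibration; out of range -> motor off.
  int strength = map(constrain(cm, 10, MAX_RANGE_CM), 10, MAX_RANGE_CM, 255, 0);
  analogWrite(MOTOR_PIN, strength);
  delay(50);
}
```

A real version would repeat this for several sensor/pad pairs and would need exactly the calibration work the pitch identifies as a challenge.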
I think this project is worth doing because the results look really beautiful and interesting, but I see a lot of challenges, for example attaching sensors to the fingernails and eyelashes, and attaching LEDs to the skin.
A popular and attractive application of wearable computing is garments with programmable visuals, realized using display technology such as LEDs. Programming such visuals is currently done at a low level, by writing microcontroller programs that control the activation of individual display elements. This approach lacks scalability – programming becomes progressively more unwieldy as the number of display elements increases. It is also ill-suited for implementing complex visual patterns that activate synchronously across multiple garments – when each garment has its own, separately programmed microcontroller, synchronizing visuals on multiple garments is a challenge. Finally, the approach is not particularly intuitive, as there is a disconnect between how display elements are accessed during programming (using pin numbers) and their logical groupings and physical locations on the garment.
I propose adapting some of the tools and workflows traditionally employed in computer animation to the problem of programming wearable visuals. I envision a system that allows authoring of animated visuals on multiple garments in a single place – a graphical animation tool on your PC. The lighting of many display elements across one or more garments could be programmed visually and using a much smaller set of logical parameters. For instance, imagine a shirt with an array of LEDs on the front and the designer wants to light them up in a pulsating heart shape. The designer could define a single animation parameter called “ShirtHeartShape” that maps to the pins of all those LEDs, keyframe its value at different times, and use interpolating curves to achieve a nice, pulsating effect. The designer could also author arbitrarily complex visuals across multiple garments by synchronously animating multiple such parameters. Most of the complexity, both in terms of algorithms and creative effort, would happen in the animation tool. There would be no need to program microcontrollers on individual garments – their only task would be to receive time-indexed animation frames (pin values) from the animation tool and realize the visuals by applying the frame values to appropriate pins.
Controlling display elements on a garment using hand-authored animation curves.
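To make the garment-side role concrete, here is a minimal sketch of what that receiver firmware might look like, assuming frames arrive over a serial link as a sync byte followed by one brightness byte per LED; the pin list, baud rate, and frame format are placeholder assumptions rather than a settled design.

```cpp
// Minimal garment-side receiver: apply incoming animation frames to LED pins.
// Assumed frame format: one 0xFF sync byte followed by one brightness byte
// per LED, in the order listed in LED_PINS. Pin numbers are placeholders.
const int LED_PINS[] = {3, 5, 6, 9, 10, 11};  // PWM-capable pins (assumed)
const int NUM_LEDS = sizeof(LED_PINS) / sizeof(LED_PINS[0]);

void setup() {
  Serial.begin(115200);                // the animation tool streams frames here
  for (int i = 0; i < NUM_LEDS; i++) {
    pinMode(LED_PINS[i], OUTPUT);
  }
}

void loop() {
  // Wait for the sync byte that marks the start of a frame.
  if (Serial.available() && Serial.read() == 0xFF) {
    byte frame[NUM_LEDS];
    // Read one brightness value per LED, then apply the whole frame at once.
    if (Serial.readBytes(frame, NUM_LEDS) == NUM_LEDS) {
      for (int i = 0; i < NUM_LEDS; i++) {
        analogWrite(LED_PINS[i], frame[i]);
      }
    }
  }
}
```

A real protocol would need to handle a brightness value colliding with the sync byte (e.g. by escaping or checksumming frames), but the point stands: all of the creative and algorithmic complexity stays in the animation tool, and the garment only plays back frames.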
As a further extension, I also propose an intuitive method for selecting which display elements should be controlled by a particular animation parameter. The idea is to scan the garment using a camera in order to obtain its image, and perform automatic registration to determine the pin numbers and physical locations of all display elements. The designer could then define the parameter mapping by using a sketch-based method (an equivalent of the Photoshop lasso tool) to select a subset of LEDs directly on the image of the garment, and the system would automatically extract their pin numbers and compute the mapping.
Selecting display elements to be controlled by a single parameter using an intuitive lasso-like tool.
My idea is a security fit band. It would be a small device connected via Bluetooth to a smartphone app. The device would have a small button; when the button is pressed, the band sends a signal to the smartphone, which sends an email/text message with the user’s location to a pre-registered list of people, informing them that the user is in some kind of dangerous situation.
The challenges that we might face in this project would be:
Battery life of the fit band (it needs to be small enough yet stay connected to the phone via Bluetooth at all times)
Developing a mobile application and dealing with Apple restrictions (after some research I discovered that you can’t send an email without user authorization using Apple’s API, so the developers will have to figure out a way around it).
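For the band itself, the firmware could be as simple as the following sketch, which assumes an HC-05-style Bluetooth serial module and a momentary push button; the pin, baud rate, and message token are placeholders, and the email/text and location logic would live entirely in the phone app.

```cpp
// Minimal device-side sketch: send an alert token over a Bluetooth serial
// module when the panic button is pressed. The phone app is assumed to
// listen for the token and handle email/text and GPS location itself.
const int BUTTON_PIN = 2;            // momentary button wired to ground (assumed)
bool alreadySent = false;            // avoid flooding while the button is held

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP); // pressed reads LOW
  Serial.begin(9600);                // typical HC-05 default baud rate
}

void loop() {
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);
  if (pressed && !alreadySent) {
    Serial.println("ALERT");         // placeholder token the app listens for
    alreadySent = true;
  } else if (!pressed) {
    alreadySent = false;
  }
  delay(20);                         // crude debounce
}
```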
This is a project that I’ve been thinking about for a while; it’s kind of expensive but could be really useful on cold days. It is a multimedia jacket that basically lets you control your smartphone through Bluetooth and an Arduino.
This coat would have headphones, a microphone, an accelerometer, a gyroscope, and an LCD that would be used to pass some basic information to the user. In addition to those features, you could use discreet solar panels on the shoulders to recharge the Arduino’s batteries and even detect some basic accidents by measuring pressure in key regions of the jacket.
The challenges that people might face in this project would be:
– Solar panels are expensive.
– How to determine that the user is in danger.
– How to efficiently use the solar panels to recharge the batteries.
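As a very crude starting point for the accident-detection part, a sketch like the one below could watch one force-sensitive resistor in the jacket and flag a possible impact when the reading stays above a threshold; the pin, threshold, and timing are guesses that would need real-world tuning, and deciding what actually counts as "in danger" remains the open challenge listed above.

```cpp
// Rough impact check for one pressure point: flag a possible accident when a
// force-sensitive resistor reads above a threshold for a short interval.
// Pin, threshold, and timing are placeholders and would need tuning.
const int FSR_PIN = A0;                 // FSR voltage divider into an analog pin
const int IMPACT_THRESHOLD = 800;       // analog reading, 0-1023 scale
const unsigned long HOLD_MS = 100;      // reading must stay high this long

unsigned long aboveSince = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(FSR_PIN);
  if (reading > IMPACT_THRESHOLD) {
    if (aboveSince == 0) aboveSince = millis();
    if (millis() - aboveSince > HOLD_MS) {
      Serial.println("POSSIBLE_IMPACT"); // the phone app would decide what to do
      aboveSince = 0;
    }
  } else {
    aboveSince = 0;
  }
  delay(10);
}
```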
I have been thinking about this project for the last two weeks. It’s a glove that can be used to play computer games that require the user to press a limited number of keys, like Q, W, E, or R. In fact, it could be adjusted to work with other key combinations. Basically, you would not need a keyboard to play and could use any surface to press the buttons.
This project could be popular and pretty useful among players of MOBAs like League of Legends and Defense of the Ancients (DotA). With these special gloves you could play using a TV as a monitor and a table as a keyboard, for example. As for challenges, I think it would be a little hard to configure the Bluetooth board and create an interface on the computer that interacts with the LilyPad and works properly.
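To illustrate just the key-mapping idea (setting Bluetooth aside for the moment), here is a minimal sketch using the Arduino Keyboard library, which requires a board with native USB such as a Leonardo or Micro; the finger-pad pins and the Q/W/E/R assignment are placeholder assumptions, and the Bluetooth + LilyPad version from the pitch would replace the Keyboard calls with messages sent to the computer.

```cpp
// Glove key-mapping sketch over USB HID. Each finger pad is assumed to be a
// simple contact switch to ground; pin numbers are placeholders.
#include <Keyboard.h>

const int FINGER_PINS[] = {2, 3, 4, 5};          // one contact pad per finger (assumed)
const char FINGER_KEYS[] = {'q', 'w', 'e', 'r'}; // key each pad maps to
const int NUM_FINGERS = 4;

void setup() {
  for (int i = 0; i < NUM_FINGERS; i++) {
    pinMode(FINGER_PINS[i], INPUT_PULLUP);       // closed contact reads LOW
  }
  Keyboard.begin();
}

void loop() {
  for (int i = 0; i < NUM_FINGERS; i++) {
    if (digitalRead(FINGER_PINS[i]) == LOW) {
      Keyboard.press(FINGER_KEYS[i]);            // hold the key while the pad is pressed
    } else {
      Keyboard.release(FINGER_KEYS[i]);
    }
  }
  delay(10);
}
```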
This would be a device for those who are paralyzed from the neck down or do not have motor function in their arms and wrists. Essentially, this would be a hat that can sense the tilt of a person’s head using an accelerometer and send commands to a computer via USB to control a cursor, similar to a joystick. To click, the user bites down on a force sensor held in their mouth. The accompanying circuitry would be sewn onto a hat of the user’s choice.
Some challenges one may encounter when pursuing this project include the accuracy of movements, calibration, and sensitivity to small head movements.
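As a sketch of the cursor-control idea, the following assumes an analog accelerometer (ADXL335-style X/Y outputs), an analog force sensor for the bite click, and a board with native USB so the Arduino Mouse library can be used; the pins, center value, dead zone, and thresholds are placeholders that would need exactly the per-user calibration mentioned above.

```cpp
// Head-tilt cursor control: tilt moves the cursor like a joystick, biting on
// a force sensor clicks. Pins, center, dead zone, and thresholds are placeholders.
#include <Mouse.h>

const int TILT_X_PIN = A0;        // accelerometer X axis (assumed analog output)
const int TILT_Y_PIN = A1;        // accelerometer Y axis
const int BITE_PIN   = A2;        // force sensor in the mouthpiece
const int CENTER     = 512;       // analog reading when the head is level
const int DEAD_ZONE  = 40;        // ignore small tilts to reduce jitter
const int BITE_THRESHOLD = 600;   // reading above this counts as a bite

void setup() {
  Mouse.begin();
}

int tiltToSpeed(int reading) {
  int offset = reading - CENTER;
  if (abs(offset) < DEAD_ZONE) return 0;     // inside the dead zone: no movement
  return constrain(offset / 40, -8, 8);      // scale tilt to a small cursor step
}

void loop() {
  // Move the cursor proportionally to head tilt, like a joystick.
  Mouse.move(tiltToSpeed(analogRead(TILT_X_PIN)),
             tiltToSpeed(analogRead(TILT_Y_PIN)), 0);

  // Bite down on the force sensor to click; release to let go.
  bool biting = analogRead(BITE_PIN) > BITE_THRESHOLD;
  if (biting && !Mouse.isPressed(MOUSE_LEFT)) {
    Mouse.press(MOUSE_LEFT);
  } else if (!biting && Mouse.isPressed(MOUSE_LEFT)) {
    Mouse.release(MOUSE_LEFT);
  }
  delay(15);
}
```

The dead zone and the tilt-to-speed scaling are where the sensitivity and calibration challenges noted above would show up first.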