Jan 8, 2025 - 9:10am
Future improvements for the system include adding a foot pedal as a USB input device to reduce reliance on the right hand, since the current setup is unergonomic. The user finds logical thinking and code-related tasks harder without the right hand, possibly because part of the brain is underused. Other possibilities involve making the system more agential, such as allowing commands to be sent, observed, and modified in real time. Visual interactions, like popping up and closing windows based on gaze detection, are also intriguing, especially using Moondream for gaze tracking. The user is eager to test these enhancements once both hands are available again.
Jan 8, 2025 - 9:06am
The user discusses their experience with Handy, a custom-built external keyboard and program that helps them use the computer despite a broken finger on their right hand. While Handy has been useful, particularly for typing and voice commands in VS Code, the user still faces challenges: they cannot write with their right hand, and overuse of the left hand is causing bodily discomfort. The program does not address these physical issues.
Jan 5, 2025 - 6:04pm
The potential of using a keyboard shortcut to interact with an AI pipeline is intriguing.
Jan 5, 2025 - 2:37pm
The user is exploring the idea of a program that can take snippets of context, such as code from different files in VS Code, and run inference on each snippet separately while keeping them all in a single context window. The goal is to have more control over what is included in the context window, rather than relying on the program to determine it.
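A minimal sketch of that idea in Python. All names here are invented for illustration, and `fake_infer` stands in for a real model call; the point is that the user picks which snippets enter the window, each snippet can be run through inference on its own, and the combined window stays available as shared context.

```python
from dataclasses import dataclass, field

@dataclass
class Snippet:
    """One explicitly chosen piece of context, e.g. code from a file in VS Code."""
    source: str  # e.g. a file path
    text: str

@dataclass
class ContextWindow:
    """Holds only snippets the user added, rather than letting the
    program decide what belongs in context."""
    snippets: list[Snippet] = field(default_factory=list)

    def add(self, source: str, text: str) -> None:
        self.snippets.append(Snippet(source, text))

    def combined(self) -> str:
        # The single shared window: every snippet, labeled by its source.
        return "\n\n".join(f"# {s.source}\n{s.text}" for s in self.snippets)

    def infer_each(self, infer) -> dict[str, str]:
        # Run inference on each snippet separately, keyed by source,
        # while combined() above remains available as one context window.
        return {s.source: infer(s.text) for s in self.snippets}

# Hypothetical stand-in for a real model call.
def fake_infer(text: str) -> str:
    return f"summary({len(text)} chars)"

window = ContextWindow()
window.add("main.py", "print('hello')")
window.add("util.py", "def add(a, b): return a + b")
results = window.infer_each(fake_infer)
```

The key design choice is that nothing enters `ContextWindow` implicitly; the program never guesses what to include.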
Jan 5, 2025 - 11:06am
Using only the left hand has revealed how much the right hand is involved in thinking processes. Its unavailability makes articulating thoughts and doing logic more challenging, leading to quicker feelings of being overwhelmed.
Jan 4, 2025 - 4:43pm
To keep Andromeda.computer running smoothly, it needs to be as serverless as possible, so that a single machine failure doesn't take down multiple applications. This points toward distributed SQLite or a similar replicated solution for redundancy. The goal is smooth operation, with application code in containers for better manageability.
Jan 4, 2025 - 2:11pm
The author plans to create a meta projects page to organize and categorize audio notes into four main projects: Handy (for single-handed computer use), project tracking, a zine creation tool, and Burrito, which serves as the overarching framework for all the projects. Coffee is aiding the progress. Tomorrow, the focus is on advancing the zine project to a usable state, including adding text, images, and possibly integrating the Glyph API. The author also envisions adding multi-page functionality and allowing users to create and link pages, creating a nested context system. The ultimate goal is to provide a URL for others to explore and use the zine tool.
Jan 4, 2025 - 2:10pm
The text discusses the concept of building context within a project or a system, starting from a blank state. Two methods are mentioned for establishing this context: importing existing elements from global or other pre-defined contexts, and directly dragging and dropping files. The purpose is to organize information into meaningful units that can be easily manipulated. Additionally, it highlights the importance of making data sources explorable, such as tables and SQLite databases, which is a separate but equally crucial aspect.
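The explorable-data-source half of that can be sketched with Python's built-in `sqlite3` module: list a database's tables and their columns so the source can be browsed before anything is pulled into a context. The `notes` table below is a made-up example.

```python
import sqlite3

def explore(con: sqlite3.Connection) -> dict[str, list[str]]:
    """Map each table in the database to its column names, so the
    data source can be browsed before anything enters a context."""
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {
        # PRAGMA table_info rows are (cid, name, type, ...); index 1 is the name.
        t: [col[1] for col in con.execute(f"PRAGMA table_info({t})")]
        for t in tables
    }

# Example: an in-memory database standing in for a real data source.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
schema = explore(con)  # {'notes': ['id', 'body']}
```

The same schema listing could back a UI that lets tables be dragged into a context as units.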
Jan 4, 2025 - 2:06pm
The author is contemplating a project that involves collecting artifacts and placing them into a hypermedia context on a web page, allowing for complex interactions and transformations of the data. This project is envisioned to have a similar interface to a zine but with added complexity, akin to an application called Burrito. The goal is to enable querying, selection, and relationship-building between pieces of information, with the potential for both individual and combined transformations. The author is particularly interested in manipulating data through various operations and sees potential for integrating this interface into AR or VR environments due to their visual and spatial preferences. The core concept is a "canvas methodology" that can be applied across different digital creations, differing from the simpler drag-and-drop zine format. The author also considers the possibility of using large language models to build the application autonomously by providing detailed, context-rich information over time.