Brain Computer Interfaces Are Changing the Rules for the Future of Browsing
The typical internet user in 2022 logged on for about seven hours each day. We use the internet on a regular basis, and navigating between tabs and apps is a big part of that.
What would happen if you couldn’t use your device? Just consider how much your daily life would be affected. This is the reality that millions of people worldwide must deal with. The answer to this issue? Brain-computer interfaces.
Inspired by this article by Anush Mutyala, I started working on a possible remedy for people with movement impairments or other issues preventing them from easily interacting with the internet. The purpose of this device is to give users the ability to switch from tab to tab without moving at all!
Background
BCI stands for brain-computer interface and refers to technology that allows people to connect their brains directly to an external device. BCIs acquire brain signals and translate them into commands that a computer can understand; the computer can then perform the desired action. If you’re looking for a more thorough explanation of BCIs, check out the “What are BCIs?” section in my other article.
For this project, I used electroencephalography, commonly referred to as EEG. EEG is a non-invasive technique that records the electrical activity of the brain using electrodes placed on the scalp.
Brainwaves are often divided into five groups, classified by frequency: delta, theta, alpha, beta, and gamma waves. Lower-frequency brainwaves, such as delta, theta, and alpha, dominate when you’re relaxed or sleeping. Higher-frequency brainwaves, such as beta and gamma, dominate when you’re actively thinking.
However, it is important to note that multiple brainwaves are present at the same time.
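For reference, here is a rough map of the conventional band boundaries as a Python dictionary. The exact cut-offs vary between sources; these are the approximate ranges the code later in this article uses.

# Approximate EEG frequency bands in Hz (exact boundaries vary between sources)
EEG_BANDS_HZ = {
    "delta": (1, 4),    # deep sleep, deep relaxation
    "theta": (4, 8),    # drowsiness, light sleep
    "alpha": (8, 12),   # relaxed wakefulness
    "beta": (12, 30),   # active thinking, concentration
    "gamma": (30, 100), # high-frequency activity
}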
Using an EEG headband, I was able to code a program that lets users switch between tabs by blinking. When I blink, the sensors on my headband pick up a large fluctuation in the EEG signal. The code interprets this as an eyeblink and switches tabs. I chose eyeblinks because they create a huge fluctuation in EEG signals. You may be wondering why such a small motion affects your brainwaves so severely.
It’s not for the reasons you may think. Blinking doesn’t really change your brain activity; it interferes with the EEG recording itself. The eye movement creates an artifact in the signal being streamed to my computer, and that artifact explains the spikes I see in my data.
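To make the idea concrete, here is a minimal sketch (not the detection method used later in this article, which works on log band power): a blink shows up as a large-amplitude spike in the raw signal from the frontal electrodes, so even a crude amplitude threshold can flag it. The function name and the 150 µV threshold are made up purely for illustration.

import numpy as np

def crude_blink_flag(raw_channel_uv, threshold_uv=150.0):
    # raw_channel_uv: 1-D array of raw samples from a frontal electrode, in microvolts
    # threshold_uv: amplitude above which we assume a blink artifact (illustrative value)
    return bool(np.max(np.abs(raw_channel_uv)) > threshold_uv)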
What I Used
To record my brainwaves, I used the Muse 2 EEG headband.
I downloaded the BlueMuse app on my computer to stream data directly from the Muse device.
I made this project using Python’s Integrated Development and Learning Environment (IDLE).
I started with code from this article and made it my own, with lots of help from GitHub and ChatGPT.
Step-By-Step
First, I set up the BlueMuse stream. You can download BlueMuse here and follow the instructions on GitHub to download it and start streaming. Next, I imported all the libraries I was going to use in my code. Make sure you have all of these libraries installed, or the code won’t work.
import pyautogui
from time import sleep
from pynput.keyboard import Key, Controller
from os import system as sys
from datetime import datetime
import numpy as np
from pylsl import StreamInlet, resolve_byprop, resolve_stream
import utils  # helper functions (update_buffer, get_last_data) from the example code this project is based on
from brainflow.data_filter import DataFilter, FilterTypes, NoiseTypes
from scipy import signal
The next part of the code looks like this:
streams = resolve_byprop('type', 'EEG', timeout=2)
if len(streams) == 0:
    raise RuntimeError("Can't find EEG stream.")
inlet = StreamInlet(streams[0], max_chunklen=12)
This looks for an EEG data stream using the Lab Streaming Layer (LSL) library and stores the result in the ‘streams’ variable. The code knows whether an EEG stream is available by checking the length of ‘streams’: if the list is empty (no current streams), it raises an error.
BUFFER_LENGTH = 2
EPOCH_LENGTH = 0.5
OVERLAP_LENGTH = 0.2
SHIFT_LENGTH = EPOCH_LENGTH - OVERLAP_LENGTH
Buffer length, epoch length, overlap length, and shift length determine how long each data segment is and how often new data is collected; with the values above, each new epoch starts 0.3 seconds after the previous one. You can adjust these to your preference.
INDEX_CHANNEL_LEFT = [1]
INDEX_CHANNEL_RIGHT = [2]
INDEX_CHANNELS = [INDEX_CHANNEL_LEFT, INDEX_CHANNEL_RIGHT]
info = inlet.info()
fs = int(info.nominal_srate())
print(fs)
This combines the channels from the left and right sides of the head into a list, reads the stream’s sampling rate, and prints it. The next part of the code is the “vectorize” function:
def vectorize(df, fs, filtering=False):
    index = len(df)
    feature_vectors = []
    if filtering == True:
        DataFilter.perform_bandpass(df[:], fs, 18.0, 22.0, 4, FilterTypes.BESSEL.value, 0)
        DataFilter.remove_environmental_noise(df[:], fs, NoiseTypes.SIXTY.value)
    for y in range(0, index, fs):
        f, Pxx_den = signal.welch(df[y:y+fs], fs=fs, nfft=256)
        # Delta 1-4 Hz
        ind_delta, = np.where(f < 4)
        meanDelta = np.mean(Pxx_den[ind_delta], axis=0)
        # Theta 4-8 Hz
        ind_theta, = np.where((f >= 4) & (f <= 8))
        meanTheta = np.mean(Pxx_den[ind_theta], axis=0)
        # Alpha 8-12 Hz
        ind_alpha, = np.where((f >= 8) & (f <= 12))
        meanAlpha = np.mean(Pxx_den[ind_alpha], axis=0)
        # Beta 12-30 Hz
        ind_beta, = np.where((f >= 12) & (f < 30))
        meanBeta = np.mean(Pxx_den[ind_beta], axis=0)
        # Gamma 30-40 Hz (nominally 30 Hz and above)
        ind_Gamma, = np.where((f >= 30) & (f < 40))
        meanGamma = np.mean(Pxx_den[ind_Gamma], axis=0)
        feature_vectors.insert(y, [meanDelta, meanTheta, meanAlpha, meanBeta, meanGamma])
    powers = np.log10(np.asarray(feature_vectors))
    powers = powers.reshape(5)
    return powers
In “vectorize,” filtering can be toggled on or off as preferred; in the main loop further down, it is switched on. When filtering is on, the code band-pass filters the data and removes mains noise. The function then splits the data into one-second chunks and calculates the average power of the brainwaves in each chunk, grouped into the five frequency bands (delta, theta, alpha, beta, and gamma).
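To sanity-check the function before plugging it into the live stream, you can feed it one second of synthetic data. The variable names below are just a throwaway test harness I’m assuming, not part of the original script, and 256 Hz is the Muse 2’s nominal EEG sampling rate.

import numpy as np

fs = 256  # Muse 2's nominal EEG sampling rate
t = np.arange(fs) / fs
# one second of a fake 10 Hz (alpha-range) sine wave plus a little noise
fake_epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(fs)

powers = vectorize(fake_epoch, fs, filtering=False)
print(powers)  # five log10 band powers: delta, theta, alpha, beta, gamma

If it’s working, the third value (alpha) should clearly stand out from the others.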
NUMBER_OF_CYCLES = 150  # choose how many times this should run
data_holder_right = np.zeros(NUMBER_OF_CYCLES+1)
data_holder_left = np.zeros(NUMBER_OF_CYCLES+1)
i = 0
while (i < NUMBER_OF_CYCLES):
    i = i + 1
    eeg_buffer = np.zeros((int(fs * BUFFER_LENGTH), 1))
    n_win_test = int(np.floor((BUFFER_LENGTH - EPOCH_LENGTH) / SHIFT_LENGTH + 1))
    band_buffer = np.zeros((n_win_test, 5))
    buffers = [[eeg_buffer, eeg_buffer], [band_buffer, band_buffer]]
This section begins by defining the number of cycles the program will run. You can adjust this number to control how long the program runs, or remove the limit entirely if you want it to run indefinitely. The ‘eeg_buffer’ and ‘band_buffer’ variables are empty containers for storing data. ‘BUFFER_LENGTH’, together with the epoch and shift lengths, determines how many epochs (segments of EEG data) fit within the buffer window; adjusting these values controls how frequently data is retrieved from the EEG stream. Finally, the ‘buffers’ list is set up to hold the data that will be processed later in the program.
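Plugging in the constants from earlier (and assuming the Muse’s 256 Hz sampling rate), the window arithmetic works out like this:

# SHIFT_LENGTH = 0.5 - 0.2 = 0.3 s, so each new epoch starts 0.3 s after the last
# n_win_test = floor((2.0 - 0.5) / 0.3 + 1) = 6 epochs per 2-second buffer
# eeg_buffer has shape (fs * 2, 1) = (512, 1) raw samples
# band_buffer has shape (6, 5): six epochs by five band powers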
# (runs once per cycle, inside the while loop above)
for index in [0, 1]:
    eeg_data, timestamp = inlet.pull_chunk(timeout=1, max_samples=int(SHIFT_LENGTH * fs))
    ch_data = np.array(eeg_data)[:, INDEX_CHANNELS[index]]
    buffers[0][index] = utils.update_buffer(buffers[0][index], ch_data)

    """ 3.2 COMPUTE BAND POWERS """
    data_epoch = utils.get_last_data(buffers[0][int(index)][0], int(EPOCH_LENGTH * fs))
    band_powers = vectorize(data_epoch.reshape(-1), fs, filtering=True)
    buffers[1][index] = utils.update_buffer(buffers[1][index], np.array([band_powers]))
This section processes data from two channels, left and right. (I initially planned to trigger different actions for left and right winks, but I later changed course because telling them apart proved too difficult.) For each channel, the code pulls a chunk of EEG data from the stream along with its timestamp and stores it in the buffer. It then runs the “vectorize” function on the most recent epoch to calculate the band powers. Let’s take a look at the final part of my code:
# Bands will be given number indexes in the following order:
# delta, theta, alpha, beta, gamma.
# Delta would therefore be 0
Band_sel = 0
print(buffers[1][1][0][-1], ' ', buffers[1][0][0][-1])
print(buffers[1][1][0][-1][Band_sel])
data_holder_right[i] = buffers[1][1][0][-1][Band_sel]
data_holder_left[i] = buffers[1][0][0][-1][Band_sel]
if buffers[1][1][0][-1][Band_sel] < -1 and buffers[1][0][0][-1][Band_sel] < -1:
    print("""
    tab
    """)
    pyautogui.hotkey('ctrl', 'tab')
    buffers[1][1][0][-1][Band_sel] = 0
    buffers[1][0][0][-1][Band_sel] = 0
## elif buffers[1][0][0][-1][Band_sel] < -1.2:
##     print("""
##     left
##     """)
##     #pyautogui.hotkey('ctrl', 'shift', 'tab')
##     buffers[1][0][0][-1][Band_sel] = 0
In the first line, we set Band_sel to 0, meaning we’re focusing on the delta band. The program prints the data values so the user can see their brainwave data in real time before storing it. To execute the final action, the code checks whether the user’s brainwave band power has crossed a certain threshold. If the threshold is crossed, the code concludes that the user has blinked and uses the pyautogui library to press the keyboard shortcut that switches between tabs in a browser. You may notice that the last section of the code is commented out. This is the code that would have handled a left wink differently. Unfortunately, it wasn’t working, so I decided to disable it.
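The nested indexing in that last snippet is dense, so here is how I read it, assuming (as in the example code this project is based on) that utils.update_buffer returns a (buffer, filter_state) pair:

# buffers[1][1]                  -> the (band_buffer, filter_state) pair for the right channel
# buffers[1][1][0]               -> the band-power buffer itself, shape (n_win_test, 5)
# buffers[1][1][0][-1]           -> the most recent epoch's five band powers
# buffers[1][1][0][-1][Band_sel] -> that epoch's delta power, since Band_sel = 0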
My Thoughts
I learned so much from this project. Not only did I learn about coding with Python and processing EEG brainwave data, but I also gained valuable insights into the realities of tackling ambitious projects. I’ve never built a project quite like this, and I definitely underestimated how much time and effort I would put into it.
I thought I would just copy and paste the code I found online and be done with it. Unfortunately, it didn’t work out of the box. I spent hours debugging, and even when I had fixed all the errors, it still didn’t work! My plan was to write a program that would switch tabs to the left when I winked my left eye and to the right when I winked my right eye (that’s what I’d seen online).
But, I found no feasible way that the code could tell the difference between a left and right blink. I’ll admit even after changing my plan to make it more achievable, the code I presented in this article isn’t foolproof. In fact, I’d estimate that it only works around 60% of the time.
Sometimes, the tab would change right after I blinked, as planned, but other times, it would change on its own! It was hard to nail down the perfect threshold to detect eyeblinks in EEG data.
Knowing what I know now, I’m prepared to take on my next project with more knowledge, preparedness and determination!
Tips
BCIs can be expensive and often inaccessible for people, especially youth, who want to build projects like this. The Muse 2 is one of the cheapest EEG headbands I could find and is still a couple hundred dollars. I bought mine secondhand on Facebook Marketplace for a great price, and I suggest everyone try to buy secondhand before purchasing anything new.
AI tools can be a great help in coding, especially for beginners. I especially like to use ChatGPT to explain my code. Often, I had to copy code from a GitHub page without really knowing what it did. Getting ChatGPT to explain how things work has been incredibly helpful for this journey. Generative AI can also be a great tool for debugging and writing code.
Before embarking on a journey like mine, understand the limitations of current BCI technology, especially cheap, non-invasive options like the Muse. It’s important to note that EEG headbands like the one I used can’t do everything, so do your research!
Takeaways
Millions of people globally face difficulties interacting with the internet due to movement impairments and other issues; BCIs could be a revolutionary solution.
BCIs are devices that allow direct connections between the brain and external devices by translating brain signal data into commands for a computer.
This project uses EEG, a non-invasive technique to record brain electrical activity using electrodes on the scalp.
Brainwaves are classified into the following frequency bands: Delta, theta, alpha, beta, and gamma.
This project uses an EEG headband to switch tabs by detecting eyeblinks, which cause fluctuations in brainwave frequencies.
I faced many challenges in this project trying to reliably detect eyeblinks.
If you’re planning a project like this, I suggest buying your technology secondhand, understanding its limitations, and being prepared for anything!
Google Pixel Buds Are Now Just $69
Google’s reasonably priced Pixel Buds A-Series earbuds are now available to Android users for just $69.
While you’re on the phone, the buds can cut down on background noise, and the sound quality is pretty damn good. According to Google, the earbuds deliver up to five hours of listening time or 2.5 hours of talk time before they need to go back in their case.
With the charging case, total listening time stretches to around 24 hours, and thanks to rapid charging, just 15 minutes in the case adds about three more hours of listening.
There isn’t true active noise cancellation, but an adaptive sound feature adjusts the volume automatically.
My Everyday Tech Essentials 2024 (EDC)
As a young, beginner tech content creator, I always want to keep my followers and friends updated on how I shoot and put together my content online.
In this video, I review the various tech gadgets I carry along whenever I go out to create content, from my iPhone to my favorite oraimo BoomPop2 and many others.
Watch the video below:
Gemini To Get Assistant Routines
Assistant Routines may soon be supported in Gemini, according to an APK teardown of the Google app.
One of the main reasons so many people are sticking with Google Assistant rather than signing up for Gemini is that Gemini lacks Routines.
Unfortunately, neither the exact way Routines will be integrated into Gemini nor a possible release date is known yet.
Google first launched Google Assistant Routines back in 2017. You can use this functionality to perform several actions with a single voice command. Saying “Hey Google, let’s watch a movie,” for instance, would cause Assistant to simultaneously turn out the lights, switch on the TV, and put your phone in do not disturb mode.
Even as Google pitches Gemini, its generative-AI-powered assistant, as a replacement for Google Assistant, Gemini still lacks Routines-like functionality. Fortunately, that might not last long.
An APK teardown looks at work-in-progress code to forecast features that might be added to a service in the future, though it’s always possible that these anticipated features never reach the general public.
In the latest Google app beta for Android (version 15.24.28.29.arm64), we discovered a work-in-progress page that makes explicit reference to Assistant Routines and how Gemini would support them. We must stress that this page is still a work in progress.
Based on the information on this page, it seems that Gemini will not be getting its own Routines system, at least not yet. Rather, Gemini looks set to let you manage Assistant Routines. That will be a little awkward, since you’ll have to use Assistant to create new routines and Gemini to activate them, but it’s better than nothing.
While there are plenty of other things Assistant can do that Gemini can’t, the inability to control Routines is likely one of the main drawbacks keeping Android users from fully committing to Gemini. If nothing else, this APK teardown shows Google’s ongoing efforts to bring Gemini’s features in line with Assistant’s.
Unfortunately, we don’t know when this functionality will go live. However, given that it’s showing up in beta code, we expect it to arrive within a few weeks or months.