Brain Computer Interfaces Are Changing the Rules for the Future of Browsing
The typical internet user in 2022 logged on for about seven hours each day. We use the internet on a regular basis, and navigating between tabs and apps is a big part of that.
What would happen if you couldn’t use your device? Just consider how much your daily life would be affected. This is the reality that millions of people worldwide must deal with. One possible answer to this problem? Brain-computer interfaces.
Inspired by this article by Anush Mutyala, I started working on a possible remedy for people with movement impairments or other issues preventing them from easily interacting with the internet. The purpose of this device is to give users the ability to switch from tab to tab without moving at all!
BCI stands for brain-computer interface and refers to technology that allows people to connect their brains directly to an external device. BCIs acquire brain signals and translate them into commands that a computer can understand; the computer can then perform the desired action. If you’re looking for a more thorough explanation of BCIs, check out the “What are BCIs??” section in my other article.
For this project, I used electroencephalography, commonly referred to as EEG. EEG is a non-invasive technique that records the electrical activity of the brain using electrodes placed on the scalp to measure brainwaves.
Brainwaves are often divided into five groups, classified by frequency: delta waves, theta waves, alpha waves, beta waves, and gamma waves. Lower-frequency brainwaves, such as delta, theta, or alpha, are often found when you’re relaxed or sleeping. Higher-frequency brainwaves, such as beta or gamma, are often found when you’re thinking.
However, it is important to note that multiple brainwaves are present at the same time.
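To make the classification above concrete, the conventional band boundaries can be sketched as a small lookup table. The exact cutoffs vary from source to source, so treat these numbers as approximate:

```python
# Conventional EEG band boundaries in Hz; exact cutoffs vary between sources.
EEG_BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (12, 30),
    "gamma": (30, 100),
}

def classify_frequency(freq_hz):
    """Return the name of the band a frequency falls into, or None."""
    for name, (low, high) in EEG_BANDS.items():
        if low <= freq_hz < high:
            return name
    return None

print(classify_frequency(10))  # alpha: typical of a relaxed state
print(classify_frequency(20))  # beta: typical of active thinking
```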
Using an EEG headband, I was able to code a program enabling users to switch between tabs by blinking. When I blink, the sensors on my headband will pick up a fluctuation in the frequency of my brainwaves. The code will then interpret this data as an eyeblink and switch tabs. I chose eyeblinks because they create a huge fluctuation in EEG signals. You may be wondering why such a small motion affects your brainwaves so severely.
It’s not for the reasons you may think: blinking actually interferes with the EEG data. It creates a disruption in the stream of brainwaves being sent to my computer, which explains the spikes I see in my data.
What I Used
To record my brainwaves, I used the Muse 2 EEG headband.
I downloaded the BlueMuse app on my computer to stream data directly from the Muse device.
I made this project using Python’s Integrated Development and Learning Environment (IDLE).
I started with code from this article and made it my own, with lots of help from GitHub and ChatGPT.
First, I set up the BlueMuse stream. You can download BlueMuse here and follow the instructions on GitHub to download and start streaming. Next, I imported all the libraries I was going to use in my code. Ensure you have all of these libraries downloaded, or the code won’t work.
from time import sleep
from pynput.keyboard import Key, Controller
from os import system as sys
from datetime import datetime
import numpy as np
from pylsl import StreamInlet, resolve_byprop, resolve_stream
from brainflow.data_filter import DataFilter, FilterTypes, NoiseTypes
from scipy import signal
import pyautogui  # used later to send the tab-switch key presses
The next part of the code looks like this:
streams = resolve_byprop('type', 'EEG', timeout=2)
if len(streams) == 0:
    raise RuntimeError("Can't find EEG stream.")
inlet = StreamInlet(streams[0], max_chunklen=12)
This looks for an EEG data stream using the Lab Streaming Layer (LSL) library and stores the result in the ‘streams’ variable. The code knows whether an EEG stream is available by checking the length of the ‘streams’ variable; if it’s empty (no current streams), the code raises an error. Otherwise, the first stream found is opened with a ‘StreamInlet’.
BUFFER_LENGTH = 2
EPOCH_LENGTH = 0.5
OVERLAP_LENGTH = 0.2
SHIFT_LENGTH = EPOCH_LENGTH - OVERLAP_LENGTH
Buffer length, epoch length, overlap length, and shift length determine the length of the data segments and the rate at which the data is collected. These can be adjusted to preference.
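To make these lengths concrete, here is a quick sketch of the arithmetic (assuming the Muse 2’s 256 Hz sampling rate) showing how many overlapping epochs fit into one buffer; this is the same calculation the script performs later when it computes ‘n_win_test’:

```python
import numpy as np

BUFFER_LENGTH = 2      # seconds of data kept in the rolling buffer
EPOCH_LENGTH = 0.5     # seconds per analysis window
OVERLAP_LENGTH = 0.2   # seconds shared between consecutive windows
SHIFT_LENGTH = EPOCH_LENGTH - OVERLAP_LENGTH  # 0.3 s step between windows

fs = 256  # assumed Muse 2 sampling rate in Hz

# Number of epochs that fit in the buffer when sliding by SHIFT_LENGTH
n_windows = int(np.floor((BUFFER_LENGTH - EPOCH_LENGTH) / SHIFT_LENGTH + 1))
samples_per_epoch = int(EPOCH_LENGTH * fs)

print(n_windows)          # overlapping epochs per 2-second buffer
print(samples_per_epoch)  # samples per epoch
```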
INDEX_CHANNEL_LEFT = 
INDEX_CHANNEL_RIGHT = 
INDEX_CHANNELS = [INDEX_CHANNEL_LEFT, INDEX_CHANNEL_RIGHT]
info = inlet.info()
fs = int(info.nominal_srate())
This combines channels from the left and right sides of the head into a list and reads the sampling rate from the stream’s metadata. The next part of the code is the “vectorize” function:
def vectorize(df, fs, filtering=False):
    index = len(df)
    feature_vectors = []
    if filtering == True:
        DataFilter.perform_bandpass(df[:], fs, 18.0, 22.0, 4, FilterTypes.BESSEL.value, 0)
        DataFilter.remove_environmental_noise(df[:], fs, NoiseTypes.SIXTY.value)
    for y in range(0, index, fs):
        f, Pxx_den = signal.welch(df[y:y+fs], fs=fs, nfft=256)
        # Delta 1–4 Hz
        ind_delta, = np.where(f < 4)
        meanDelta = np.mean(Pxx_den[ind_delta], axis=0)
        # Theta 4–8 Hz
        ind_theta, = np.where((f >= 4) & (f <= 8))
        meanTheta = np.mean(Pxx_den[ind_theta], axis=0)
        # Alpha 8–12 Hz
        ind_alpha, = np.where((f >= 8) & (f <= 12))
        meanAlpha = np.mean(Pxx_den[ind_alpha], axis=0)
        # Beta 12–30 Hz
        ind_beta, = np.where((f >= 12) & (f < 30))
        meanBeta = np.mean(Pxx_den[ind_beta], axis=0)
        # Gamma 30–40 Hz (capped at 40 here rather than 100+)
        ind_Gamma, = np.where((f >= 30) & (f < 40))
        meanGamma = np.mean(Pxx_den[ind_Gamma], axis=0)
        feature_vectors.insert(y, [meanDelta, meanTheta, meanAlpha, meanBeta, meanGamma])
    powers = np.log10(np.asarray(feature_vectors))
    powers = powers.reshape(5)
    return powers
In “vectorize,” filtering can be toggled on or off. When filtering is on, the code applies a bandpass filter and removes environmental (mains) noise from the data. Next, the function separates the data into one-second chunks and calculates the average power of the brainwaves over each segment, categorizing the data into the different frequency bands (delta, theta, alpha, beta, and gamma).
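As a minimal, self-contained illustration of the band-power step (using a synthetic 10 Hz sine wave in place of real EEG, so the numbers here are made up), Welch’s method should put most of the power in the alpha band:

```python
import numpy as np
from scipy import signal

fs = 256  # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
# Synthetic "EEG": a 10 Hz (alpha-range) sine plus a little noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

f, Pxx = signal.welch(x, fs=fs, nfft=256)

def band_power(f, Pxx, low, high):
    """Mean power spectral density between low and high Hz."""
    idx = np.where((f >= low) & (f < high))
    return np.mean(Pxx[idx])

alpha = band_power(f, Pxx, 8, 12)
beta = band_power(f, Pxx, 12, 30)
print(alpha > beta)  # the 10 Hz tone dominates the alpha band
```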
NUMBER_OF_CYCLES = 150  # choose how many times this should run
data_holder_right = np.zeros(NUMBER_OF_CYCLES+1)
data_holder_left = np.zeros(NUMBER_OF_CYCLES+1)
i = 0  # cycle counter, incremented once per loop iteration
eeg_buffer = np.zeros((int(fs * BUFFER_LENGTH), 1))
n_win_test = int(np.floor((BUFFER_LENGTH - EPOCH_LENGTH) /
                          SHIFT_LENGTH + 1))
band_buffer = np.zeros((n_win_test, 5))
buffers = [[eeg_buffer, eeg_buffer], [band_buffer, band_buffer]]
This section of the code begins by defining the number of cycles the program will run. You can adjust this number to control how long the program runs, or remove the limit entirely if you want it to run indefinitely. The ‘eeg_buffer’ and ‘band_buffer’ variables are empty containers for storing data. The ‘n_win_test’ variable determines how many epochs (segments of EEG data) fit within the ‘BUFFER_LENGTH’ time window; by adjusting these values, you can control how frequently data is retrieved from the EEG stream. Finally, we set up the ‘buffers’ list to hold the data that will be processed later in the program.
for index in [0, 1]:
    eeg_data, timestamp = inlet.pull_chunk(timeout=1, max_samples=int(SHIFT_LENGTH * fs))
    ch_data = np.array(eeg_data)[:, INDEX_CHANNELS[index]]
    # 'utils' is the helper module from the article this project is based on
    buffers[index] = utils.update_buffer(buffers[index], ch_data)
    """ 3.2 COMPUTE BAND POWERS """
    data_epoch = utils.get_last_data(buffers[int(index)], int(EPOCH_LENGTH * fs))
    band_powers = vectorize(data_epoch.reshape(-1), fs, filtering=True)
    buffers[index] = utils.update_buffer(buffers[index], np.array([band_powers]))
This section of code processes data from two channels: left and right. (I initially planned to have different outcomes for left and right blinks, but later changed course because it proved too difficult.) It then retrieves EEG data from the stream with an associated timestamp and stores the collected data for later use. Within the previously mentioned epochs, the code uses the “vectorize” function to calculate brainwave band powers. Let’s take a look at the final part of my code:
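The ‘utils’ module isn’t shown in this article; it comes from the helper code in the project mine is based on. Here is a rough sketch of what its two functions would need to do (my assumption of the behaviour, not the original implementation):

```python
import numpy as np

def update_buffer(data_buffer, new_data):
    """Append new samples to a rolling buffer, dropping the oldest ones
    so the buffer keeps a fixed length (sketch of assumed behaviour)."""
    new_data = np.atleast_2d(new_data).reshape(-1, data_buffer.shape[1])
    combined = np.concatenate((data_buffer, new_data), axis=0)
    return combined[-data_buffer.shape[0]:, :]

def get_last_data(data_buffer, n_samples):
    """Return the newest n_samples rows of the buffer."""
    return data_buffer[-n_samples:, :]

# Usage sketch: a 4-sample, 1-channel buffer receiving 2 new samples
buf = np.zeros((4, 1))
buf = update_buffer(buf, np.array([[1.0], [2.0]]))
print(buf.ravel())                    # oldest samples pushed out
print(get_last_data(buf, 2).ravel())  # the two newest samples
```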
# Bands are given number indexes in the following order:
# delta, theta, alpha, beta, gamma.
# Delta would therefore be 0.
Band_sel = 0
data_holder_right[i] = buffers[-1][Band_sel]
data_holder_left[i] = buffers[-1][Band_sel]
if buffers[-1][Band_sel] < -1:
    pyautogui.hotkey('ctrl', 'tab')  # switch to the next browser tab
    buffers[-1][Band_sel] = 0
## elif buffers[-1][Band_sel] < -1.2:
##     # pyautogui.hotkey('ctrl', 'shift', 'tab')
##     buffers[-1][Band_sel] = 0
In the first line, we set Band_sel to 0, meaning we’re focusing on the delta band. The program stores each new value so the user can follow their brainwave data in real time. To execute the final action, the code checks whether the user’s brainwaves have crossed a certain threshold. If the threshold is met, the code concludes that the user has blinked and uses the pyautogui library to press the key combination that switches between tabs in a browser. You may notice the last section of the code is commented out: this is the code that would have created a different outcome for a left blink. Unfortunately, it wasn’t working, so I removed it from the program.
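Stripped of the EEG plumbing, the detection logic boils down to a simple threshold test. As a toy sketch with made-up numbers (the real threshold depends on your own data):

```python
THRESHOLD = -1.0  # log10 band power below which we call it a blink (tuned by hand)

def detect_blink(band_power_value, threshold=THRESHOLD):
    """Return True when the band power crosses the blink threshold."""
    return band_power_value < threshold

# Simulated stream of delta-band log-power values; the dip is the "blink"
stream = [-0.2, -0.3, -1.4, -0.25]
blinks = [detect_blink(v) for v in stream]
print(blinks)  # only the dip below -1.0 registers as a blink
```

In the real script, a True here is what triggers the tab-switch key press.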
I learned so much from this project. Not only did I learn about coding with Python and processing EEG brainwave data, but I also gained valuable insights into the realities of tackling ambitious projects. I’ve never built a project quite like this, and I definitely underestimated how much time and effort I would put into it.
I thought that I would just copy and paste the code I found online and be done with it. Unfortunately, it didn’t work out of the box. I spent hours debugging, and even when I finished fixing all the errors, it still didn’t work! My plan was to write code that would switch tabs to the left when I winked my left eye and to the right when I winked my right eye (that’s what I saw done online).
But, I found no feasible way that the code could tell the difference between a left and right blink. I’ll admit even after changing my plan to make it more achievable, the code I presented in this article isn’t foolproof. In fact, I’d estimate that it only works around 60% of the time.
Sometimes, the tab would change right after I blinked, as planned, but other times, it would change on its own! It was hard to nail down the perfect threshold to detect eyeblinks in EEG data.
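One common way to pick a less arbitrary threshold (a technique I didn’t use, but would try next time) is to record a short blink-free baseline and set the threshold a few standard deviations below the baseline mean:

```python
import numpy as np

def calibrate_threshold(baseline, k=3.0):
    """Threshold k standard deviations below the baseline mean,
    so only large downward spikes (like blink artifacts) trigger."""
    baseline = np.asarray(baseline)
    return float(baseline.mean() - k * baseline.std())

# Hypothetical blink-free baseline of delta-band log power
baseline = [-0.30, -0.25, -0.35, -0.28, -0.32]
thr = calibrate_threshold(baseline)
print(thr < min(baseline))  # threshold sits safely below normal values
```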
Knowing what I know now, I’m prepared to take on my next project with more knowledge, preparedness and determination!
BCIs can be expensive and often inaccessible for people, especially youth, who want to build projects like this. The Muse 2 is one of the cheapest EEG headbands I could find and is still a couple hundred dollars. I bought mine secondhand on Facebook Marketplace for a great price, and I suggest everyone try to buy secondhand before purchasing anything new.
AI tools can be a great help in coding, especially for beginners. I especially like to use ChatGPT to explain my code. Often, I had to copy code from a GitHub page without really knowing what it did. Getting ChatGPT to explain how things work has been incredibly helpful for this journey. Generative AI can also be a great tool for debugging and writing code.
Before embarking on a journey like mine, understand the limitations of current BCI technology, especially cheap, non-invasive options like the Muse. It’s important to note that EEG headbands like the one I used can’t do everything, so do your research!
Millions of people globally face difficulties interacting with the internet due to movement impairments and other issues; BCIs could be a revolutionary solution.
BCIs are devices that allow direct connections between the brain and external devices by translating brain signal data into commands for a computer.
This project uses EEG, a non-invasive technique to record brain electrical activity using electrodes on the scalp.
Brainwaves are classified into the following frequency bands: Delta, theta, alpha, beta, and gamma.
This project uses an EEG headband to switch tabs by detecting eyeblinks, which cause fluctuations in brainwave frequencies.
I faced many challenges in this project trying to reliably detect eyeblinks.
If you’re planning to do a project like this, I suggest buying your hardware secondhand, understanding its limitations, and being prepared for anything!
Best PS5 Games To Play Now
The PlayStation 5 is a shining example of cutting-edge technology as the gaming industry develops, offering immersive experiences that push the limits of interactive entertainment. If you’re a happy PS5 owner, you surely have an insatiable appetite for the greatest games out there.
This is a carefully selected list of the games you should not miss in [current year] to elevate your gaming experiences.
“Demon’s Souls” is an amazing reimagining of the iconic action role-playing game that popularized a genre. With its stunning graphics, difficult gameplay, and eerie atmosphere, this game perfectly captures the PS5’s full potential. Get ready to be enthralled with the breathtaking Boletaria as you fight strong opponents and learn the mysteries of this dark fantasy land.
“Spider-Man: Miles Morales”
Take off in “Spider-Man: Miles Morales,” a thrilling superhero story that takes place in the colorful streets of New York City. In addition to stunning visuals and an engaging story, this stand-alone “Spider-Man” expansion offers the exhilaration of web-swinging across the well-known skyline. Accompany Miles Morales on his quest to succeed as Spider-Man and protect the city against fresh dangers.
“Ratchet & Clank: Rift Apart”
Prepare for an adventure beyond dimensions when you play “Ratchet & Clank: Rift Apart.” This action-packed platformer, with its smooth transitions between various dimensions, gorgeous graphics, and quick gameplay, perfectly displays the capabilities of the PS5. Accompany Ratchet, Clank, and a brand-new Lombax called Rivet as they engage in cross-multiverse combat with the evil Dr. Nefarious.
“Returnal” combines elements of roguelikes with an engrossing story to create a singular and difficult experience. As Selene, a space explorer imprisoned on an alien planet caught in a never-ending cycle of death and rebirth, set out on a mysterious adventure. For players looking for a fast-paced, unpredictable gaming experience, “Returnal” stands out thanks to its dynamic environments, fierce combat, and gripping narrative.
More than just a tech demo, “Astro’s Playroom” is a fun trip through the PlayStation universe that comes with every PS5. This endearing platformer offers a fun and engaging experience while showcasing the capabilities of the DualSense controller. Discover the joy of gaming with Astro Bot and venture into colorful worlds inspired by the past of PlayStation.
In conclusion, the PlayStation 5 establishes itself as a gaming powerhouse by providing a wide variety of experiences that suit the tastes of all gamers with these excellent titles.
The PS5 library includes games for all genres, including inventive platformers, superhero adventures, and difficult role-playing games. Immerse yourself in these games right now to experience previously unheard-of levels of immersion and excitement.
Enjoy your gaming, and don’t forget to share your favorite PS5 games in the comments!
Huawei Mate 60 Pro+ crowned King of Camera phones in DxOMark test
The well-known Chinese smartphone manufacturer Huawei is returning to the market in spite of US restrictions. Its limitations on the chips it could use caused performance issues, but its camera quality has remained excellent. Following the Huawei P60 Pro to the top of the DxOMark rankings, the Huawei Mate 60 Pro+ demonstrated a powerful performance for flagship smartphones.
Specifically, the Huawei Mate 60 Pro+ has dominated the ultra-premium market, outperforming rivals such as the Oppo Find X6 Pro, Pixel 8 Pro, and iPhone 15 Pro Max. In the DxOMark camera test, it received 157 points, one point more than the Huawei P60 Pro. It has won the title in two categories: best Bokeh and Photos (primary camera performance for taking still photos in different lighting conditions). This is a more thorough analysis of its entire performance.
Huawei Mate 60 Pro+ DxOMark test overview
The Huawei Mate 60 Pro+ earns high marks in its DxOMark camera review, which makes it well suited for all kinds of photos and videos in different lighting. Particularly impressive is its ability to deliver outstanding results for Friends & Family photos, ensuring moments are captured with precision and skin tones are rendered accurately, even in challenging conditions.
The camera’s variable aperture is a noteworthy feature, automatically adjusting to the number of people in the scene, ensuring everyone remains in focus. The ultrawide camera excels in capturing expansive scenes while maintaining a high level of detail. Additionally, the camera performs admirably across all zoom distances, delivering a commendable level of detail. However, it does exhibit slight limitations in capturing videos in challenging low-light conditions, showing some visible constraints in such scenarios.
New firmware updates released for OnePlus Nord N20, Nord N30 and OnePlus 9
OnePlus recently rolled out new firmware updates for three of its popular devices: the OnePlus Nord N20, Nord N30, and OnePlus 9. These updates aim to improve system security with the November 2023 Android security patch.
While each update brings improvements specifically tailored to its device, all of them contribute to a better overall user experience. Let’s take a closer look at what each update entails.
OnePlus Nord N20
With firmware version CPH2459_11.C.17, OnePlus Nord N20 users in North America will receive the latest security patch. The rollout has begun and will gradually reach all eligible devices in the region. Along with system security improvements, users can expect improved performance and bug fixes.
OnePlus Nord N30
OnePlus Nord N30 users across North America can now install the incremental update, which ships with firmware version CPH2513_126.96.36.1992(EX01). Similar to the Nord N20 update, this release focuses on strengthening system security to ensure a safer user experience.
Additionally, users can anticipate optimizations that could improve the overall performance of the device.
OnePlus 9 Pro
OnePlus 9 Pro receives an exciting new firmware update with version LE2121_188.8.131.522(EX01). This update brings the November 2023 security patch, strengthening the security framework of the device. Once the Pro version update has been successfully rolled out, OnePlus 9 users can expect to receive a similar update shortly after.
As with any incremental update, deployment will take some time. Users should expect to receive updates gradually, with rollouts spanning a few days to a few weeks. However, users can manually check for update availability by accessing the System Settings on their devices.
Q: What is an incremental update?
An incremental update is a software update that introduces minor changes to an existing version of a device’s operating system. These updates typically focus on providing bug fixes, security improvements, and performance optimizations.
Q: How long will it take for updates to reach all devices?
The distribution of updates can vary, from a few days to a few weeks. It depends on factors such as device region, carrier involvement, and other considerations.
Q: Where can users report bugs or technical issues?
Users can report bugs and technical issues via the OnePlus community or contact OnePlus support directly. In India, users can also use Google Dialer by dialing *#800# to access additional support options.
These firmware updates from OnePlus demonstrate the company’s commitment to continuously improving the security of its devices and the overall user experience. Users can expect a more secure and enjoyable smartphone experience with these updates installed.
Source: OnePlus Insider