/fa/ - Fashion

Dress to Impress

Open file (318.89 KB 1280x1600 1291767.jpg)
/fa/g 05/12/2020 (Tue) 23:25:03 No.177
Researchers at Northeastern University have developed a garment designed to confuse digital surveillance algorithms into thinking you don’t exist.
>Normally, surveillance algorithms work by recognising a characteristic in an image, drawing a ‘bounding box’ around it, and assigning a label to that object. To interrupt this, the t-shirt uses colourful, pixelated patterns to confuse the technology into thinking you don’t exist. In other words, the clusters of pixels are placed to confuse the AI’s classification and labelling system, making it harder for it to map out your facial features.
>“For the physical attacks, the real challenge is to remain undetected during the whole video duration,” says Battista Biggio, assistant professor at the University of Cagliari, and creator of the first adversarial example, which was successful in fooling spam email detection. “When detection is running in every frame, remaining consistently undetected is much harder.”
>The researchers recorded a person walking while wearing a checkerboard pattern and tracked the corners of each of the board’s squares in order to accurately map out how it wrinkles when the person moves. Using this technique improved the ability to evade detection from 27 per cent to 63 per cent against YOLOv2, and from 11 per cent to 52 per cent against Faster R-CNN.
>However, Lin says it’s unlikely that we’ll see these T-shirts in the real world. “We still have difficulties in making it work in the real world because there’s that strong assumption that we know everything about the detection algorithm,” she explains. “It’s not perfect, so there may be problems here or there.”
>In fact, the researchers don’t actually want to help people evade surveillance technology at all. Instead, Lin says that the team’s ultimate goal is to find holes in neural networks so that surveillance firms can fix them, rather than to assist people in avoiding detection. “In the future, hopefully we can fix these problems, so the deep learning systems can’t be tricked.”
tl;dr Make pixelated, bright-colored, headache-inducing shirts to avoid face detection. The people who are working on this aren't going to do it for you. They want to be able to detect you no matter what.
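
For context, the evasion percentages quoted above are per-frame rates against standard object detectors. Below is a rough sketch of how you could measure the same kind of number yourself with a pretrained Faster R-CNN from torchvision; the video filename, the 0.5 score threshold and the frame-reading loop are illustrative assumptions, not the researchers' actual evaluation code.

[code]
# Rough sketch: count the fraction of video frames in which a pretrained
# Faster R-CNN finds no "person" above a confidence threshold.
# File name and threshold are assumptions; this is not the paper's pipeline.
import cv2                      # pip install opencv-python
import torch
import torchvision

# pretrained=True may print a deprecation warning on newer torchvision versions
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

PERSON_CLASS_ID = 1             # COCO class index for "person"
SCORE_THRESHOLD = 0.5           # assumed detection cutoff

cap = cv2.VideoCapture("walking_test.mp4")   # hypothetical test clip
frames_total, frames_evaded = 0, 0

with torch.no_grad():
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frames_total += 1
        # OpenCV gives BGR uint8 HxWxC; the detector wants RGB float CHW in [0, 1].
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        detections = model([tensor])[0]
        person_found = any(
            int(label) == PERSON_CLASS_ID and float(score) >= SCORE_THRESHOLD
            for label, score in zip(detections["labels"], detections["scores"])
        )
        if not person_found:
            frames_evaded += 1

cap.release()
print(f"Evaded detection in {frames_evaded}/{frames_total} frames "
      f"({100.0 * frames_evaded / max(frames_total, 1):.1f}%)")
[/code]

The 27/63 and 11/52 per cent figures in the article are essentially this ratio, computed over a recorded walking clip.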
>>177
When are we getting a fashionable version of that?
>The people who are working on this aren't going to do it for you. They want to be able to detect you no matter what.
Very important to not forget that.
>>179 One of the guys on /ita/ works at a custom shirt manufacturer, you could ask them there
>>181 Thanks for the tip. First things first, though: I need to solidify the design and have it available at full quality to hand to whoever ultimately prints it. Once a solid design exists, it will be easier to spread for anyone to make or alter as they see fit.
>>182
Actually I haven't read all of what you posted, but I've worked for quite a while with this stuff and I can share some thoughts with you.

Most likely, facial recognition software uses a mix of real-time classification (individual features like eyes, mouths etc.) and semantic segmentation (tracing an entire object as it moves through the frames of a video like a CCTV feed as its own entity, i.e. telling apart two different people who are walking side by side). The way these systems are trained (assuming they're Convolutional Neural Networks) is by
>taking a huge amount of input data (generally speaking, big govts use social media apps to grab as many photos of people's faces as they can, then a good chunk of them are analyzed and their overall class and/or features are marked by hand by a human specialist),
>making the filters of said convnet learn the basic features of the pictures (to make it as babby simple as possible, think of a table: the first layer of filters analyzes all areas of high contrast in a picture, like sudden horizontal and vertical changes, which correspond to the base and the legs of the table; the second layer MIXES said basic filter outputs, i.e. it can find the CORNERS of the table; the third layer might find rounder creases on the wooden fillings of the table, and so on and so forth), and
>tuning the non-trainable parameters of the model to better fit the problem at hand, which is to say, trying to iron out remaining errors (False Positives/Negatives for a given class or segment).

Now, why did I have to type all this technological mumbo jumbo? Because at its core, the way these surveillance systems work is by being slightly more advanced neural networks with a human overseer to spot non-trivial features (for instance, a human is needed to tell a picture on an advertisement poster apart from a real-life human posing near it). One way to trick the artificial side of the system is, you guessed it, creating artificial contrast. The anti-surveillance makeup you see popping up from time to time in cyberpunk flicks is meant to do just that (usually straight black lines over the most important features like the mouth or eyes). A more sophisticated way to achieve the same effect is to paint extra features like ears or eyes or the back of a person onto the target: the system tries real hard to find items that match certain features, fails, and scores your features with low probabilities, meaning you may still be found but they can't quite get your picture right. What you see here >>180 is also pretty ingenious, as these people have basically created pictures whose features blend into each other, meaning the convnets can still find them, but the data on them is so scarce that the net misassigns a class to them.

Now, convnets themselves are piss easy to build. All you need is a PC with an off-the-shelf consumer GPU, Python with tensorflow-gpu / keras installed, a bucketload of faces or people wearing t-shirts (super easy to find; there's a ton of Kaggle competitions for both classification and semantic segmentation, and one of them is bound to be about either), and a couple of days to train them properly. You instantiate either a custom-made or a pre-built convnet, train it appropriately on the dataset following some tutorials, then test it out on pictures you find on the internet. If the probability of a picture containing faces of a certain kind (i.e. side faces vs. faces looking straight on) is high enough, the system will mark it as such.

However, what you can do at that point is manually edit the picture to add or remove features and test it on the same convnet, over and over, until you get really low confidence scores. That's when you know your design or tattoos can easily fuck up a basic system. If you wanna learn more about this, I strongly suggest you study some Image Processing. It's eye-opening how stupid the procedure is and how deeply it affects our everyday lives.
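
To make that workflow concrete, here is a minimal sketch of the "train a toy convnet, then score your edited pictures against it" loop in Keras, roughly as the post describes. Everything specific to it is an assumption for illustration: the faces_dataset/ folder (with face/ and no_face/ subfolders of labelled crops, e.g. pulled from a Kaggle dataset), the candidates/ folder of hand-edited test images, the 128x128 input size and the tiny three-block convnet. It is not any real surveillance system, just the feedback loop described above.

[code]
# Minimal sketch: train a toy face / no-face convnet, then score hand-edited
# pictures against it to see which edits drop the "face" probability.
# Paths, image size and architecture are illustrative assumptions only.
import glob
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE = (128, 128)

# Labelled crops, one subfolder per class: faces_dataset/face, faces_dataset/no_face.
# class_names pins "face" to label 1 so the sigmoid output reads as P(face).
train_ds = keras.utils.image_dataset_from_directory(
    "faces_dataset",
    class_names=["no_face", "face"],
    label_mode="binary",
    image_size=IMG_SIZE,
    batch_size=32,
)

# Deliberately small convnet: each Conv2D block learns progressively
# higher-level features (edges -> corners -> face-like parts), as described above.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P("face")
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Score each hand-edited candidate: lower P(face) means the edit (pattern,
# makeup, tattoo...) is doing a better job of confusing this particular net.
for path in sorted(glob.glob("candidates/*.jpg")):
    img = keras.utils.load_img(path, target_size=IMG_SIZE)
    x = np.expand_dims(keras.utils.img_to_array(img), axis=0)
    p_face = float(model.predict(x, verbose=0)[0][0])
    print(f"{path}: P(face) = {p_face:.2f}")
[/code]

Fooling your own toy net obviously says nothing about fooling a serious system trained on far more data, but it's the same try-edit-retest cycle, and it runs fine on a consumer GPU.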
>>183 Nice post, though as the OP sort of mentioned, even if we manage to fool the machine, we won't fool the people. I can see such clothing/makeup becoming illegal in, say, the UK because it "aids terrorism" or something; either that, or people wearing this stuff get special focus. For this to truly change the game we'd need a lot of people to do it, and I don't see that happening.
>>183
wew, thank you for this knowledgeable and well written post.
>anti surveillance makeup
This is, from my memory, the original way people figured out for fooling facial recognition, along with ridiculous hair. Things like black triangles and white triangles above/below the eyes. With half my head shaved, I imagine I could fool the recognition for side (profile) shots if I painted the bald part of my head with the odd geometry.
>learn more about this
You've already done a spectacular job spoonfeeding this information, which I am very grateful for, but I must ask if there's a particular book you would suggest as a starting point, so that I may not need spoonfeeding again in the future?
>>184
>won't fool the people
This is true, and while I can't speak for the UK, I believe the government here would have a hard time justifying a legal ban on a shirt.
>need a lot of people
Agreed. That is the ultimate goal here, for me at least. This is why I want to achieve a working design: so it can be disseminated widely. The more confident, i.e. effay, people wear it, the more normal it will be considered. If attractive people start wearing it in public, more will follow. This is a curve I intend to be ahead of, and luckily I do not appear to be alone in this desire. Thinking about it, for the short term I might try to get a "fake" one printed up, if only to start instilling in others the acceptance of this new, hideous design-type, so that once a good pattern is achieved it will be more easily accepted by those around me day to day.
Interesting takes, anons. The CyberP makey-cakey makes sense now. Thinking about people like Native Americans, it's interesting to see such facial paint making a sort of comeback. Could such makeup be a remnant of an ancient, technologically advanced past that got lost post-flood? Really makes you think.
>>272
>CyberP makey-cakey
wat
Open file (6.00 MB 640x352 obfuscation.MP4)
nice
