Enrique Dans

3 years ago

You may not know about The Merge, yet it could change society

More on Technology

Corin Faife

3 years ago

The new USB Rubber Ducky is more dangerous than ever.

Corin Faife and Alex Castro

With its own programming language, the popular hacking tool can now pwn you.

The USB Rubber Ducky is back with a vengeance.

A new version of the popular hacking tool was released at this year's Def Con hacking conference, and its creator, Darren Kitchen, was on hand to explain it. We tested some of the new features and found that the latest version is more dangerous than ever.

WHAT IS IT?

To the untrained eye, the USB Rubber Ducky looks like an ordinary USB flash drive. When you plug it into a computer, however, the machine registers it as a USB keyboard and accepts keystroke commands from the device exactly as if a person were typing them.

"It takes advantage of the trust model built in, where computers have been taught to trust a human, in that anything it types is trusted to the same degree as the user is trusted," Kitchen explained to me. "And a computer knows that clicks and keystrokes are how people generally interact with it."

The USB Rubber Ducky, the brainchild of Darren Kitchen.

The first Rubber Ducky was released over ten years ago and quickly became a hacker favorite (it was even featured in a Mr. Robot scene). There have been a number of small upgrades since then, but the latest Rubber Ducky takes a big leap forward with a set of new features that significantly increase its flexibility and capability.

WHAT CAN IT BE USED FOR?

With the right approach, the options are nearly unlimited.

The Rubber Ducky has already been used for attacks such as creating a fake Windows pop-up window to harvest a user's login credentials or tricking Chrome into sending all saved passwords to an attacker's web server. However, these attacks had to be crafted for specific operating systems and software versions and lacked the flexibility to work across platforms.

The nuances of DuckyScript 3.0 are described in a new manual. 

The latest Rubber Ducky aims to overcome these limitations. It ships with a major upgrade to the DuckyScript programming language, which is used to create the commands that the Rubber Ducky types into a target machine. While earlier versions were mostly limited to scripting keystroke sequences, DuckyScript 3.0 is a feature-rich language that lets users write functions, store variables, and use logic flow controls (i.e., if this... then that).

This means that the new Ducky can, for example, check whether it is plugged into a Windows or Mac machine and conditionally execute code appropriate to each, or disable itself if it has been connected to the wrong target. It can also generate pseudorandom numbers and use them to add a variable delay between keystrokes for a more human-like effect.
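For illustration, an OS-aware payload using those constructs might look something like the sketch below. This is a hedged approximation based on the features described above, not a payload from the article; exact keywords and variable names should be checked against Hak5's DuckyScript 3.0 documentation.

REM A sketch of DuckyScript 3.0 flow control (approximate syntax)
DELAY 2000
VAR $IS_WINDOWS = TRUE
IF ( $IS_WINDOWS == TRUE ) THEN
    REM Windows branch: open the Run dialog and type a command
    GUI r
    DELAY 500
    STRINGLN notepad
ELSE
    REM Wrong target: do nothing
END_IF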

Perhaps most impressively, it can steal data from a target machine by encoding it in binary and transmitting it through the signals meant to tell a keyboard when the CapsLock or NumLock LEDs should light up. With this technique, an attacker could plug the Ducky in for a few seconds, excuse themselves with "Sorry, I think that USB drive is faulty," and walk away with stolen credentials recorded on it.

HOW SERIOUS IS THE RISK?

Potentially a significant one, but because physical access to a device is required, most people aren't at risk of being a target.

According to Kitchen, the 500 or so new Rubber Duckies that Hak5 brought to Def Con were the company's best-selling item at the convention, and they were all gone on the first day. It's safe to assume that hundreds of hackers already own one, and demand is likely to persist for some time.

Additionally, the Rubber Ducky has an online development suite that can be used to write attack payloads, compile them, and load them onto the device. A "payload hub" section of the site makes it simple for hackers to share what they've created, and the Hak5 Discord is busy with conversation and helpful advice. In short, it's easy for users of the product to connect with a larger community.

It's too expensive for most people to distribute in volume, so unless your favorite cafe is renowned as a hangout for vulnerable targets, it's doubtful that someone will scatter a few of them around. That said, if you intend to plug in a USB device that you found lying around in public, think twice about it first.

WOULD IT WORK FOR ME?

Although the device is quite straightforward to use, a few things could trip you up if you have no prior experience writing or debugging code. During testing on a Mac, I was unable for a while to get the Ducky to press the F4 key to open the launchpad, but the problem was solved by forcing it to identify itself using an alternative Apple keyboard device ID.

From there, I was able to write a script that, as soon as the Ducky was plugged in, would launch Chrome, open a new browser tab, and then immediately close it again without requiring any action from the laptop's user. Not bad for only a few hours of testing, and something that could easily be modified to do things other than read technology news.
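For the curious, that test payload boils down to something like the following sketch (an approximation of the behavior described, not the author's actual script; the delays may need tuning per machine):

REM Open Spotlight, launch Chrome, then open and close a tab (macOS)
DELAY 1000
COMMAND SPACE
DELAY 400
STRING Google Chrome
DELAY 600
ENTER
DELAY 2000
REM New tab, then close it again
COMMAND t
DELAY 400
COMMAND w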

Tom Smykowski

2 years ago

CSS Scroll-linked Animations Will Transform The Web's User Experience

In ten years, we may never tap again.

I discussed styling websites and web apps on smartwatches in my earlier article on W3C standardization.

The Parallax Chronicles

Section containing examples and flying objects

Another intriguing Working Draft I found applies to all devices, including smartphones.

The following pages have something intriguing in common. Take your time with them, scroll around, and come back:

What connects these three pages?

JustinWick at English Wikipedia • CC-BY-SA-3.0

The effect is scroll-linked animation, commonly known as parallax.

The effect became popular around 2014, thanks to the quick setup and low-code tools offered by WordPress theme developers.

Parallax: Why Designers Love It

The chapter that your designer shouldn't read

Ten years ago, playing a video online required searching, scrolling, and clicking. Four years ago, just scrolling and clicking.

Now, some video sites let you simply swipe to autoplay the next video from an endless list.

To accommodate this behavioral change, UI designers create scrollable pages and apps.

Web interactivity used to be mouse-based. Clicking a button opened a help drawer, and hovering animated it.

A longer page, however, holds more material with fewer buttons and less click-based interactivity.

So designers reach for scroll-based effects. Frontend developers can fight the trend, but they should prepare for the worst.

How to Create Parallax

The chapter that you might want to show your designer

Traditionally, JavaScript-based effects track the page's scroll position and apply animations accordingly.

JavaScript libraries like lax.js simplify this.

Even then, using one involves a lot of manual mathematical and physics calculations.
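To see how much manual bookkeeping is involved, here is a minimal hand-rolled example with no library at all (a sketch; the .parallax-layer selector and the 0.4 speed factor are made up for illustration):

// Move a background layer at 40% of the scroll speed for a
// simple parallax depth illusion.
const layer = document.querySelector('.parallax-layer');

window.addEventListener('scroll', () => {
  const offset = window.scrollY * 0.4; // tune per layer, breakpoint, device...
  layer.style.transform = `translateY(${offset}px)`;
}, { passive: true });

Multiply this by every layer, easing curve, and screen size, and the appeal of a declarative solution becomes obvious.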

Your asset library must also be ready to display your website on a laptop, a television, a smartphone, a tablet, a foldable smartphone, and possibly even a microwave.

Overall, scroll-based animations deserve a better solution.

CSS Scroll-linked Animations

CSS makes sense for this, since it's presentational. A Working Draft has been laying the groundwork for the next generation of interactivity.

The new CSS property scroll-timeline powers the feature, which MDN describes well.
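As a taste of the API, a reading-progress bar driven purely by scroll position might look roughly like this (a sketch; the draft syntax has shifted between spec revisions and browser versions, so check MDN for the current form):

/* A fixed progress bar that fills as the document scrolls. */
@keyframes grow {
  from { transform: scaleX(0); }
  to { transform: scaleX(1); }
}

.progress {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 4px;
  background: rebeccapurple;
  transform-origin: 0 50%;
  animation: grow 1s linear;
  /* Drive the animation with the scroll position instead of the clock. */
  animation-timeline: scroll();
}

No JavaScript and no scroll listeners: the browser maps scroll progress onto the animation's timeline.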

Before testing it, be aware that it is still poorly supported:

Firefox 103 currently supports it.

There is also a polyfill, with some demo examples to explore.

Summary

Web design has come a long way. It started with pages of static background images and scrollable text. With the scroll-linked animation CSS API, artists and designers may completely revamp our web experience.

It's a promising frontier, and perhaps this post will inspire a future designer of the scrollable web.

P.S. I have created flashcards for HTML, JavaScript, and more. Check them out!

Dmitrii Eliuseev

2 years ago

Creating Images on Your Local PC Using Stable Diffusion AI

Generative art based on deep learning is an active research area. And as usual, trying something yourself beats reading about it. Some models, like OpenAI's DALL-E 2, require registration and can only be used online, but others can run locally, which is usually more fun for curious users. I'll demonstrate how the Stable Diffusion model works on an ordinary PC.

Image generated by Stable Diffusion 2.1

Let’s get started.

What It Does

Stable Diffusion combines several components:

  • A diffusion model is a generative model trained to produce images by incrementally refining starting data that is nothing but random noise. During training, the reverse process is used: noise is added to an image step by step. The real magic is the model's ability to reverse this procedure and create images from noise (more details and samples can be found in the paper).

  • A latent diffusion model works on an internal compressed representation, which can be steered to produce the desired images (more details can be found in the paper). This ability to fine-tune the generation process is essential, because producing random pictures is not very useful (as we saw, for instance, with Generative Adversarial Networks).

  • CLIP (Contrastive Language-Image Pre-training) is a neural network model used to translate natural language prompts into vector representations. Trained on 400,000,000 image-text pairs, it allows Stable Diffusion to transform a text prompt into the diffusion model's latent space (more details in that paper).

This figure shows the overall data flow:

Model architecture, Source © https://arxiv.org/pdf/2112.10752.pdf

The model is quite large: the weights file is 4 GB for Stable Diffusion v1 and 5 GB for v2. The v1 model was trained on 256x256 and 512x512 images from the LAION-5B dataset on a 4,000-GPU cluster, using more than 150,000 NVIDIA A100 GPU hours. Happily for us, the pre-trained model is open source, and we will use it.
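As a side note, these components map almost one-to-one onto code if you use the Hugging Face diffusers library instead of the original scripts used below. This is a hedged alternative sketch (the diffusers package and the CompVis/stable-diffusion-v1-4 model ID are not part of this article's setup):

import torch
from diffusers import StableDiffusionPipeline

# Half precision keeps VRAM usage low on a GPU; CPUs need float32.
use_cuda = torch.cuda.is_available()
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16 if use_cuda else torch.float32,
)
pipe = pipe.to("cuda" if use_cuda else "cpu")

# CLIP encodes the prompt, the diffusion model iteratively denoises
# a random latent, and the decoder turns the final latent into pixels.
image = pipe("a hamster with a brush, oil painting").images[0]
image.save("hamster.png")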

Install

To use the Python sources for Stable Diffusion v1 from GitHub, we must first install Miniconda (assuming Git and Python are already installed):

wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.12.0-Linux-x86_64.sh
chmod +x Miniconda3-py39_4.12.0-Linux-x86_64.sh
./Miniconda3-py39_4.12.0-Linux-x86_64.sh
conda update -n base -c defaults conda

Clone the source and prepare the environment:

git clone https://github.com/CompVis/stable-diffusion
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm
pip3 install transformers --upgrade

Next, download the pre-trained model weights. Hugging Face has the latest checkpoint, sd-v1-4.ckpt (the download is free, but registration is required). Put the file in the project folder and have fun:

python3 scripts/txt2img.py --prompt "hello world" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1

Well, almost. For happy owners of modern GPUs with 12 GB or more of VRAM, the installation is complete. Everyone else will get a RuntimeError: CUDA out of memory. There are two solutions.

Running the optimized version

First, try the optimized version. After cloning the repository and activating the environment (as before), we can run the command:

python3 optimizedSD/optimized_txt2img.py --prompt "hello world" --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1

With this, Stable Diffusion worked on my graphics card with 8 GB of RAM (alas, I did not behave well enough to get an NVIDIA A100 for Christmas, so an 8 GB GPU is the best I have ;).

Running Stable Diffusion without GPU

If the GPU does not have enough RAM or is not CUDA-compatible, the code can run on a CPU; it will be about 20x slower, but that is better than nothing. The easiest route is this unofficial CPU-only branch on GitHub. Alternatively, we can easily edit the source code of the latest version ourselves. Strangely, a pull request for this was made six months ago and still has not been merged, even though the changes are simple; readers can finish them in 5 minutes:

  • In ldm/models/diffusion/ddim.py, line 20, replace if attr.device != torch.device("cuda") with if attr.device != torch.device("cuda") and torch.cuda.is_available().

  • In ldm/models/diffusion/plms.py, line 20, make the same replacement.

  • In ldm/modules/encoders/modules.py, lines 38, 55, 83, and 142, replace device="cuda" with device="cuda" if torch.cuda.is_available() else "cpu".

  • In scripts/txt2img.py, line 28, and scripts/img2img.py, line 43, replace model.cuda() with if torch.cuda.is_available(): model.cuda().
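All four edits boil down to the same guard, which in isolation looks like this (a minimal sketch):

import torch

# Use CUDA only when it is actually available; otherwise fall back
# to the CPU so the scripts can still run.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Sampling will run on {device}")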

Run the script again.

Testing

Let's test the model. The first option is text-to-image. Run the command-line example again:

python3 scripts/txt2img.py --prompt "hello world" --plms --ckpt sd-v1-4.ckpt --skip_grid --n_samples 1

Generation is slow: about 10 seconds on a GPU and about 10 minutes on a CPU. The final image:

The SD V1.4 first example, Image by the author

"Hello world" turned out dull and abstract. Let's try a hamster with a brush instead. Why? Because we can, and it's no crazier than Napoleon's cat. Another image:

The SD V1.4 second example, Image by the author

Generating an image from a text prompt plus another image is also interesting. I made this sketch in two minutes in an image editor (sorry, drawing was never my strong suit):

An image sketch, Image by the author

Now I can generate an image from this drawing:

python3 scripts/img2img.py --prompt "A bird is sitting on a tree branch" --ckpt sd-v1-4.ckpt --init-img bird.png --strength 0.8

The result is far better than my initial drawing:

The SD V1.4 third example, Image by the author

I hope the idea is clear, and I encourage readers to experiment.

Stable Diffusion UI

Developers love the command line, but ordinary users may struggle with it. Stable Diffusion UI projects make installation and image generation much simpler. Usage is straightforward:

  • Download the ZIP from https://github.com/cmdr2/stable-diffusion-ui/releases and unpack it. Stable Diffusion UI supports Linux and Windows (sorry, Mac users, but those machines are not well suited to heavy machine learning tasks anyway ;).

  • Start the script.

Done. The web browser UI makes configuring various Stable Diffusion features (upscaling, filtering, etc.) easy:

Stable Diffusion UI © Image by author

Stable Diffusion V2.1

While writing this article, I noticed the announcement of version 2.1's release, and it was intriguing to test it. First, let's compare version 2 with version 1:

  • A different text encoder. Stable Diffusion 1 uses CLIP (Contrastive Language-Image Pre-training), a deep learning model trained on a large number of text-image pairs. Stable Diffusion 2 uses OpenCLIP, an open-source CLIP implementation. It is hard to say whether this brought technical improvements or whether legal concerns were the main driver. However, because the two text encoders were trained on different datasets, V1 and V2 will produce different outputs for identical text prompts.

  • A new depth model that can be applied to image-to-image generation.

  • A new upscaling technique that can quadruple the resolution of an image.

  • Generally higher resolution: Stable Diffusion 2 can produce both 512x512 and 768x768 images.

The Hugging Face website offers a free online demo of Stable Diffusion 2.1. For testing the code locally, the process is the same as for version 1.4. Download the fresh sources and activate the environment:

conda deactivate  
conda env remove -n ldm  # Use this if version 1 was previously installed
git clone https://github.com/Stability-AI/stablediffusion
cd stablediffusion
conda env create -f environment.yaml
conda activate ldm

A new weights (ckpt) file can be downloaded from Hugging Face.

An out-of-memory error prevented me from running this version on my 8 GB GPU. On a CPU, version 2.1 fails with a "slow_conv2d_cpu" not implemented for 'Half' error (according to this GitHub issue, CPU support for this algorithm and data type will not be added). The model could be switched from half to full precision (float32 instead of float16), but this makes little sense: v1 already takes up to 10 minutes on a CPU, and v2.1 would be even slower. The online demo shows the results, though. The same hamster-with-a-brush prompt yielded this:

A Stable Diffusion 2.1 example

It looks different from v1, but it works, and the resolution is higher.

The superresolution.py script can run the 4x Stable Diffusion upscaler locally (the x4-upscaler-ema.ckpt weights file should be in the same folder):

python3 scripts/gradio/superresolution.py configs/stable-diffusion/x4-upscaling.yaml x4-upscaler-ema.ckpt

This launches a web browser UI where the image to upscale can be selected:

Why the upscaler needs a text prompt is unclear; perhaps it is a copy-paste artifact (the Hugging Face code snippet has no text input either). I got a GPU out-of-memory error again, and although CUDA can be disabled just as in v1, waiting more than two hours to process a single image is hardly realistic:

Stable Diffusion 4X upscaler running on CPU © Image by author

Stable Diffusion Limitations

When using the model, it's fun to probe what it can and can't do. Generative models produce nice abstract visuals but struggle with photorealistic ones, and there is a fundamental reason for this. The generative neural network was trained on text and image pairs, but humans have a lot of background knowledge about the world, and the neural network knows none of it. If someone asks me to write a Chinese text, I can draw something that looks like Chinese but is actually gibberish, because I never learned the language. Generative AI does the same! Humans can learn new languages, but the Stable Diffusion AI model contains only the "language" and "image decoder" components of the brain. For instance, asked for NO WAR banner-bearers, the Stable Diffusion model draws something like this:

V1:

V2.1:

The images show text-like shapes, although the model never learned to read or write. As a side note, the model's string tokenizer converts letters to lowercase before generating the image, so the prompts "NO WAR banner" and "no war banner" are identical.
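A quick way to verify the lowercasing claim (a sketch assuming the transformers package, which the setup above installs; Stable Diffusion v1 uses OpenAI's CLIP text encoder, whose tokenizer lowercases input before byte-pair encoding):

from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
print(tok.tokenize("NO WAR banner"))
print(tok.tokenize("NO WAR banner") == tok.tokenize("no war banner"))  # True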

I can also ask the model to draw a gorgeous woman:

V1:

V2.1:

The first image is gorgeous but anatomically incorrect. The second one is better, although it has an uncanny-valley feel. By the way, v2 has a handy trick: a negative prompt that defines what we don't want in the image. Readers might try adding "horrible anatomy" as a negative prompt to the gorgeous-woman request.

If we ask for a cartoon of an attractive woman instead, the results are nice, and anatomical accuracy matters less:

V1:

V2.1:

Another example: I asked the model to draw a mouse. It looks lovely but has too many legs, ears, and fingers:

V1:

V2.1: improved but not perfect.

If I ask for something more abstract, V1 produces a fun cartoon of a flying mouse:

I tried multiple times with V2.1 but only received this:

The image is OK, but the first version is closer to the request.

As we can see, Stable Diffusion struggles with letters, fingers, and the like, but abstract images yield interesting outcomes. A rural landscape with a modern metropolis in the background turned out well:

V1:

V2.1:

Generative models can help make paintings too (at least abstract ones). I searched Google Image Search for "modern art painting" to see works by real artists, and this was the first image:

“Modern art painting” © Google’s Image search result

Then I typed "abstract oil painting of people dancing" and got this:

V1:

V2.1:

It's a different style, but I don't think the AI-generated graphics are worse than the human-drawn ones.

Last but not least, the AI model cannot think the way humans do; in fact, it does not think at all. A Stable Diffusion model is a billion-parameter matrix trained on millions of text-image pairs. To create an image for this post, I entered "robot is creating a picture with a pen". A human understands such a request immediately. I tried Stable Diffusion multiple times and got this:

This otherwise great artwork has a pen, a robot, and a sketch, but it is not what was asked for. Maybe the tokenizer dropped the words "is" and "a" from the sentence, but other prompts such as "robot painting picture with pen" did not work either. It is harder to prompt a model than a person.

I hope Stable Diffusion's general capabilities are now evident. Despite its limitations, it can produce beautiful images in some settings. One warning for readers who want to use Stable Diffusion's results: examining the source code shows that Stable Diffusion images carry a hidden watermark (the text StableDiffusionV1 or SDV2) encoded with the invisible-watermark Python package. This is not a secret; the official Stable Diffusion repository's test_watermark.py file contains a decoding snippet. The put_watermark line in the txt2img.py source code can be removed if desired. I did not find this watermark on images made by the online Hugging Face demo. Maybe I did something incorrectly (or maybe they simply do not use the txt2img script on their backend at all).
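Checking for the watermark yourself is straightforward. The sketch below follows the decoding snippet in the repository's test_watermark.py; the image path is illustrative, and the 136-bit length matches the 17-byte string StableDiffusionV1:

import cv2
from imwatermark import WatermarkDecoder

# Load a generated image and try to recover the hidden watermark.
bgr = cv2.imread("outputs/txt2img-samples/sample.png")
decoder = WatermarkDecoder('bytes', 136)
watermark = decoder.decode(bgr, 'dwtDct')
print(watermark.decode('utf-8', errors='replace'))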

Conclusion

Playing with the Stable Diffusion model was fascinating. As I mentioned before, trying something yourself is always better than taking someone else's word for it, so I encourage readers to do the same (with this article included ;).

Is Generative AI a game-changer? My humble experience tells me:

  • I think this field has a lot of potential. For designers and artists, generative AI can be a truly useful and innovative tool. Unfortunately, it can also pose a threat to some of them: if users can type into a text field and obtain a picture or a website logo in a few clicks, why would they pay a third party? Is that possible right now? Unquestionably not yet. Image quality is still poor, and small details are often wrong. And after viewing the image of the stunning woman above, models and fashion photographers can also relax; it is highly unlikely that AI will replace them in the coming years.

  • Today, generative AI is still in its infancy. Neural networks are computationally very expensive, and even 768x768 images are considered high resolution for them. As of this writing, there is no AI model that can natively generate high-resolution images without upscaling or other tricks, but it will happen eventually.

  • It is still a challenge to accurately represent knowledge in neural networks (facts like how many legs a cat has or the year Napoleon was born). Consequently, AI models struggle to create photorealistic images, at least where small details are important (on the other hand, when I searched Google for modern art paintings, the results were often even worse ;).

  • Compared with the carefully chosen images on official web pages or in YouTube reviews, the average output of a Stable Diffusion run is less attractive, owing to the process's high degree of randomness. Users running the same technique on their own may find that showcase-quality images represent only about 1% of the results.

Anyway, it's exciting to witness this area's advancement, especially because the project is open source. Google's Imagen and DALL-E 2 can also produce remarkable results, and it will be interesting to see how they progress.

You might also like

INTΞGRITY team

3 years ago

Terms of Service

Effective: August 31, 2022

These Terms of Service ("Terms") govern your access to and use of INTΞGRITY’s (or "we") websites, mobile applications, and other online products and services (collectively, the "Services"). By clicking your assent (e.g. "Continue," "Sign-in," or "Sign-up") or by utilizing our Services, you consent to these Terms, including the mandatory arbitration provision and class action waiver in the Resolving Disputes; Binding Arbitration Section.

Our Privacy Policy describes how we gather and utilize your information, while our Rules detail your duties when utilizing our Services. You agree to be bound by these Terms and our Rules by utilizing our Services. Please refer to our Privacy Statement for details on how we collect, utilize, disclose, and otherwise manage your information.

Please contact us at hello@int3grity.com if you have any queries regarding these Terms or our Services.

Account Details and Responsibilities

You are responsible for your use of the Services and any content you contribute, including compliance with all relevant laws. The Services may host content that is protected by the intellectual property rights of third parties. Please do not copy, post, download, or distribute content without permission.

You must adhere to our Rules when using the Services.

To use any or all of our Services, you may need to register for an account. Help protect your account: safeguard your password and keep your account details accurate. We advise you not to share your password with anyone else.

If you are accepting these Terms and using the Services on behalf of someone else (such as another person or entity), you confirm that you are allowed to do so, and the words "you" or "your" in these Terms refer to that other person or entity.

You must be at least 13 years old to access our services.

If you use the Services to access, collect, or otherwise utilize the personal information of other INTΞGRITY users ("Personal Information"), you agree to comply with all applicable laws. You also undertake not to sell any Personal Information, where "sell" has the meaning ascribed to it by relevant legislation.

For Personal Information you provide to us (as a Newsletter Editor, for example), you represent and warrant that you have lawfully collected the Personal Information and that you or a third party have provided all required notices and obtained all required consents prior to collecting the Personal Information. You further represent and warrant that INTΞGRITY’s use of such Personal Information in accordance with the purposes for which you provided the Personal Information will not violate, misappropriate, or infringe any rights of a third party (including intellectual property rights or privacy rights) or cause us to violate any applicable laws.

The Services' User Content

INTΞGRITY may monitor your conduct and material for compliance with these Terms and our Rules, and reserves the right to remove any content that violates these guidelines.

INTΞGRITY reserves the right to remove or disable content that is alleged to violate the intellectual property rights of others, as well as to cancel the accounts of repeat infringers. We respond to notices of alleged copyright violations if they comply with the law; please report such notices using our Copyright Policy.

Ownership and Rights

You maintain ownership of all content that you submit, upload, or display on or through the Services.

By submitting, posting, or displaying content on or through the Services, unless otherwise agreed in writing, you grant INTΞGRITY a nonexclusive, royalty-free, worldwide, fully paid, and sublicensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your content and any name, username or likeness provided in connection with your content in all media formats and distribution methods now known or later developed.

INTΞGRITY requires this license because you are the owner of your material, and INTΞGRITY cannot show it across its multiple platforms (mobile, online) without your consent.

This type of license is also required for content distribution throughout our Services. For example, you may publish a piece on INTΞGRITY. It is duplicated as versions on both our website and app, and distributed to many locations on INTΞGRITY, including the homepage and reading lists. One modification might be that we display a fragment of your work as a preview (rather than the entire post), with attribution. An example of a derivative work might be a list of top authors or quotations on INTΞGRITY that includes chunks of your article, again with full attribution. This license solely applies to our Services and does not grant us permissions outside of our Services.

So long as you comply with these Terms, INTΞGRITY grants you a limited, non-exclusive, personal, and non-transferable license to access and utilize our Services.

Copyright, trademark, and other United States and international laws protect the Services. These Terms do not grant you any right, title, or interest in the Services, the material posted by other users on the Services, or INTΞGRITY’s trademarks, logos, or other brand characteristics.

In addition to the content you submit, post, or display on our Services, we appreciate your feedback, which may include your thoughts, ideas, and suggestions regarding our Services. This input may be used for any reason at our sole discretion and without obligation to you. We may treat your comments as non-confidential.

We reserve the right, at our sole discretion, to discontinue the Services or any of its features. In addition, we reserve the right to impose limits on use and storage, and to remove or restrict the distribution of content on the Services.

Termination

You may stop using our Services at any time. We reserve the right to suspend or terminate your access to the Services with or without notice.

Moving and Processing Information

To enable us to deliver our Services, you accept that we may handle, transfer, and retain information about you in the United States and other countries, where you may not enjoy the same rights and protections as you do under local law.

Indemnification

To the maximum extent permitted by applicable law, you will indemnify, defend, and hold harmless INTΞGRITY, and our officers, directors, agents, partners, and employees (collectively, the "INTΞGRITY Parties"), from and against any losses, liabilities, claims, demands, damages, expenses or costs ("Claims") arising out of or relating to your violation, misappropriation, or infringement of any rights of another (including intellectual property rights or privacy rights). You undertake to promptly notify INTΞGRITY Parties of any third-party Claims, to assist INTΞGRITY Parties in fighting such Claims, and to pay any fees, charges, and expenses connected with defending such Claims (including attorneys' fees). You further agree that, at INTΞGRITY’s sole discretion, the INTΞGRITY Parties will govern the defense or settlement of any third-party Claims.

Disclaimers — Services Provided "As Is"

INTΞGRITY strives to provide you with excellent Services, but there are certain things we cannot guarantee. Utilization of our services is at your own risk. You acknowledge that our Services and any content uploaded or shared by users on the Services are given "as is" and "as available" without explicit or implied warranties of any kind, including warranties of merchantability, fitness for a particular purpose, title, and non-infringement. In addition, INTΞGRITY does not represent or promise that our Services are accurate, comprehensive, dependable, up-to-date, or error-free. No advice or information gained from INTΞGRITY or via the Services shall create any warranty or representation unless expressly set forth in this section. INTΞGRITY may provide information on third-party products, services, activities, or events, or we may permit third parties to make their material and information accessible via our Services (collectively, "Third-Party Content"). We neither control nor endorse any Third-Party Content, nor do we make any claims or warranties about it. Accessing and utilizing Third-Party Content is at your own risk. The disclaimers in this section may not apply to you if they are prohibited in your location.

Limitation of Liability

We do not exclude or limit our obligation to you where it would be unlawful to do so; this includes any liability for the gross negligence, fraud, or willful misconduct of INTΞGRITY or the other INTΞGRITY Parties in providing the Services. In jurisdictions where the foregoing exclusions are not permitted, our liability to you is limited to losses and damages that are reasonably foreseeable as a result of our failure to exercise reasonable care and skill or breach of contract with you. This paragraph does not impact consumer rights that cannot be waived or limited by contract.

In jurisdictions that permit liability exclusions or limits, INTΞGRITY and INTΞGRITY Parties will not be liable for:

(a) Any indirect, consequential, exemplary, incidental, punitive, or extraordinary damages, or any loss of use, data, or profits, based on any legal theory, even if INTΞGRITY or the other INTΞGRITY Parties were advised of the potential of such damages.

(b) Except for the types of liability we cannot limit by law (as described in this section), we limit the total liability of INTΞGRITY and the other INTΞGRITY Parties for any claim arising out of or related to these Terms or our Services, regardless of the form of action, to $100.00 USD.

Arbitration; Resolution of Disputes

We intend to address your concerns without filing a formal lawsuit. Before making a claim against INTΞGRITY, you agree to contact us and attempt to resolve the dispute informally by emailing hello@int3grity.com or by sending certified mail to INTΞGRITY, P.O. JOY, 479 Jessie St, San Francisco, CA 94103. The notice must (a) contain your name, address, email address, and telephone number; (b) identify the nature and grounds of the claim; and (c) detail the relief requested. Our notice to you will be sent to the email address linked with your online account and will contain the information specified in the preceding section. Any party may commence a formal procedure if we are unable to reach a resolution within thirty (30) days of the date of any notice.

Please read the following section carefully because it compels you to arbitrate certain claims and disputes with INTΞGRITY and limits the method in which you can seek redress from us, unless you opt out of arbitration by following the steps provided below. This arbitration provision does not permit class or representative lawsuits or arbitrations. In addition, arbitration prohibits you from filing a lawsuit or having a jury trial.

(a) Absence of Representative Actions You and INTΞGRITY agree that any dispute arising out of or relating to these Terms or our Services is personal to you and INTΞGRITY and will be resolved entirely via individual action, and not by class arbitration, class action, or other representative procedure.

(b) Dispute Arbitration. Except for small claims disputes in which you or INTΞGRITY seeks to bring an individual action in small claims court located in the county where you reside and disputes in which you or INTΞGRITY seeks injunctive or other equitable relief for the alleged infringement or misappropriation of intellectual property, you and INTΞGRITY waive your rights to a jury trial and to have any other dispute arising out of or relating to these Terms or our Services, including claims related to privity of contract, decided by a jury. All Disputes submitted to JAMS shall be decided by confidential, binding arbitration before a single arbitrator. If you are a consumer, you may choose to have the arbitration in your county of residence. A "consumer" is a person who uses the Services for personal, family, or household purposes for the purposes of this provision. You and INTΞGRITY agree that Disputes shall be resolved using the JAMS Streamlined Arbitration Rules and Procedures ("JAMS Rules"). The latest version of the JAMS Rules is accessible on the JAMS website and is incorporated herein by reference. Either you accept and agree that you have read and comprehended the JAMS Rules or you forfeit your right to read the JAMS Rules and any claim that the JAMS Rules are unreasonable or should not apply for any reason.

(c) You and INTΞGRITY agree that these Terms affect interstate commerce and that the enforceability of this provision is subject to the Federal Arbitration Act, 9 U.S.C. 1 et seq. (the "FAA"), to the maximum extent permissible by applicable law. As limited by the FAA, these Terms, and the JAMS Rules, the arbitrator will have sole authority to make all procedural and substantive judgments regarding any Dispute, and to grant any remedy that would otherwise be available in court, including the authority to determine arbitrability. The arbitrator may only conduct an individual arbitration and may not consolidate the claims of more than one party, preside over any sort of class or representative procedure, or preside over any proceeding involving more than one party.

d) The arbitration will permit the discovery or exchange of nonconfidential information pertinent to the Dispute. The arbitrator, INTΞGRITY, and you will maintain the confidentiality of all arbitration proceedings, judgments, and awards, as well as any information gathered, prepared, or presented for the purposes of the arbitration or relating to the Dispute(s) therein. Unless the law specifies otherwise, the arbitrator will have the right to make decisions that protect confidentiality. The duty of confidentiality does not apply where disclosure is required to prepare for or conduct the arbitration hearing on the merits, in connection with a court application for a preliminary remedy, in connection with a judicial challenge to an arbitration award or its enforcement, or where disclosure is otherwise required by law or judicial decision.

e) You and INTΞGRITY agree that for any arbitration you begin, you will pay the filing fee (up to $250 if you are a consumer) and INTΞGRITY will pay the remaining JAMS fees and costs. INTΞGRITY will pay all JAMS fees and costs for any and all arbitrations it initiates. You and INTΞGRITY agree that the state and federal courts of California and the United States located in San Francisco have exclusive jurisdiction over any appeals and the implementation of an arbitration award.

(f) Any Dispute must be filed within one year after the relevant claim arose; otherwise, the Dispute is permanently barred, meaning that neither you nor INTΞGRITY will be able to assert the claim.

(g) You have the right to opt-out of binding arbitration within 30 days of the date you initially accepted the terms of this section by sending an email to hello@int3grity.com. For the opt-out notification to be effective, it must include your full name and address and clearly explain your intent to opt out of binding arbitration. By declining binding arbitration, you consent to the resolution of Disputes in accordance with "Governing Law and Venue" below.

(h) If any portion of this section is found to be unenforceable or unlawful for any reason: (1) the unenforceable or unlawful provision shall be severed from these Terms; (2) the severance of the unenforceable or unlawful provision shall have no effect whatsoever on the remainder of this section or the parties' ability to compel arbitration of any remaining claims on an individual basis pursuant to this section; and (3) to the extent that any claims must therefore proceed on an individual basis, the parties agree to arbitrate those claims on an individual basis. In addition, if it is determined that any portion of this section prohibits an individual claim seeking public injunctive relief, that provision will be null and void to the extent that such relief may be sought outside of arbitration, and the balance of this section will be enforceable.

Statute and Location

These Terms and any dispute that may arise between you and INTΞGRITY are governed by California law, excluding its conflict of law provisions. Any issue between the parties that is not arbitrable or cannot be heard in small claims court will be determined by the state or federal courts of California and the United States, sitting in San Francisco, California.

Some nations have regulations that require agreements to be controlled by the consumer's country's laws. These statutes are not overridden by this paragraph.

Amendments

Periodically, we may make modifications to these Terms. If we make modifications, we will notify you by sending an email to the address connected with your account, providing an in-product message, or amending the date at the top of these Terms. Unless we specify otherwise in our notification, the modified Terms will take effect immediately, and your continued use of our Services after we issue such notice indicates your acceptance of the changes. If you do not accept the updated Terms, you must cease using our services.

Severability

If any section or portion of a provision of these Terms is determined to be unlawful, void, or unenforceable, that provision or part of the provision shall be deemed severable from these Terms and shall not affect the validity and enforceability of the other terms.

Miscellaneous INTΞGRITY’s omission to assert or enforce any right or term of these Terms is not a waiver of such right or provision. These Terms and the terms and policies specified in the Other Terms and Policies that May Apply to You Section constitute the complete agreement between the parties pertaining to the subject matter hereof and supersede all prior agreements, statements, and understandings between the parties. The section headings in these Terms are for convenience only and have no legal or contractual significance. The use of the word "including" shall be taken to mean "including without limitation." Unless otherwise specified, these Terms are intended solely for the benefit of the parties and are not intended to confer third-party beneficiary rights on any other person or entity. You consent to the use of electronic means for our communications and transactions.

Adrien Book

3 years ago

What is Vitalik Buterin's newest concept, the Soulbound NFT?

Decentralizing Web3's soul

Our technology must reflect our non-transactional connections. Web3 arose in the absence of strong social links, and it must strengthen those links to achieve widespread adoption. Soulbound NFTs can help.

These NFTs create digital proofs of our social ties. They embody G. Simmel's idea of identity, in which individuality emerges from social groups, just as social groups evolve from individuals.

They are multipurpose. First, they gather our distinctive social features online. Second, they highlight and categorize the social relationships between entities and people, creating a spiderweb of networks.

1. 🌐 Reducing online manipulation: Only socially rich or respectable crypto wallets can participate in projects, ensuring that no one can create several wallets to influence decentralized project governance.

2. 🤝 Improving social links: Some sectors of society are denied social context; racism, sexism, and homophobia see to that. Public wallets can help identify and connect distinct social groupings.

3. 👩‍❤️‍💋‍👨 Increasing pluralism: Soulbound tokens can ensure that closely connected wallets have less combined voting power online, increasing pluralism. We can also give extra weight to minority voices.

4. 💰 Making more informed decisions: Taking out an insurance policy requires a review of one's life. Why not loans? Character isn't captured by income alone, and many people deserve a chance.

5. 🎶 Finding a community: Soulbound tokens are accessible to everyone. This means we can find people who are like us but also different. This is probably rare among your friends and family.

These NFTs also carry dangers I don't like: social credit scores, privacy risks, lost wallets. We must stay informed and keep talking to the innovators.

E. Glen Weyl, Puja Ohlhaver and Vitalik Buterin get all the credit for these ideas, having written the very accessible white paper “Decentralized Society: Finding Web3’s Soul”.

Yogita Khatri

3 years ago

Moonbirds NFT sells for $1 million in first week

On Saturday, Moonbird #2642, one of the collection's rarest NFTs, sold for a record 350 ETH (over $1 million) on OpenSea.

The Sandbox, a blockchain-based gaming company based in Hong Kong, bought the piece. The seller, "oscuranft" on OpenSea, made around $600,000 after buying the NFT for 100 ETH a week ago.

Owl avatars

Moonbirds is a collection of 10,000 owl NFTs, and it is one of the quickest collections to achieve blue-chip status. Proof, a media startup founded by renowned VC Kevin Rose, launched Moonbirds on April 16.

Rose is currently a partner at True Ventures, a technology-focused VC firm. He was a Google Ventures general partner and has 1.5 million Twitter followers.

Rose hosts an NFT podcast at Proof. It follows Proof Collective, a group of 1,000 NFT collectors and artists, including Beeple, who each hold a Proof Collective NFT and receive special benefits.

These include early access to the Proof podcast and in-person events.

According to the Moonbirds website, they are "the official Proof PFP" (profile picture).

Moonbirds NFTs have generated nearly $360 million in sales in just over a week, according to The Block Research and Dune Analytics. The collection's top ten sales range from $397,000 to $1 million.

In the current market, Moonbirds trade at a floor price of 33.3 ETH, while each NFT was minted for 2.5 ETH. Holders have thus gained over 12x in just over a week.

Why was it so popular?

Thomas Bialek, an NFT analyst at The Block Research, attributes Moonbirds' rapid rise to Rose's backing, the success of his earlier Proof Collective project, and collectors' preference for proven NFT projects.

Proof Collective NFT holders have made huge gains. These NFTs were sold in a Dutch auction last December for 5 ETH each. According to OpenSea, the current floor price is 109 ETH.

According to The Block Research, citing Dune Analytics, Proof Collective NFTs have sold over $39 million to date.

Rose has bigger plans for Moonbirds. Moonbirds is introducing "nesting," a non-custodial way for holders to stake NFTs and earn rewards.

Holders of NFTs can earn different levels of status based on how long they keep their NFTs locked up.

"As you achieve different nest status levels, we can offer you different benefits," he said. "We'll have in-person meetups and events, as well as some crazy airdrops planned."

Rose went on to say that Proof is just the start of "a multi-decade journey to build a new media company."