tZERO’s Blockchain Securities Platform Sets Sights on 2026 Public Listing
Wall Street's digital revolution charges forward as tZERO announces its 2026 IPO timeline.
SECURITIES GOING ON-CHAIN
The blockchain-based trading platform—already shaking up traditional finance with tokenized assets—plans to take its own disruption public. tZERO's move represents the ultimate validation play: becoming the very traditional security it seeks to disrupt.
FROM DISRUPTOR TO LISTED ASSET
While legacy exchanges grapple with blockchain adoption, tZERO builds its own runway. The platform's 2026 target positions it ahead of broader institutional crypto adoption curves. Because nothing says 'revolution' like filing S-1 paperwork and appeasing shareholder demands.
Wall Street's future may be decentralized—but its IPOs remain firmly in the hands of underwriters and lawyers.
Anthropic Updates Consumer Terms and Privacy Policy
Anthropic, an AI safety and research company, has announced significant updates to its Consumer Terms and Privacy Policy. These changes are designed to enhance the capability and safety of its AI models, such as Claude, by offering users more control over their data usage, according to Anthropic.
Data Usage and User Control
With the new updates, users of Anthropic's Claude Free, Pro, and Max plans can choose whether their data is used to improve AI models and strengthen safeguards against harmful activities. This option, however, does not extend to services under the company's Commercial Terms, such as Claude for Work or API usage through platforms like Amazon Bedrock and Google Cloud’s Vertex AI.
Users are encouraged to participate in this initiative to help refine model safety and accuracy, particularly in detecting harmful content and improving coding, analysis, and reasoning skills. New users will make this choice during the signup process, while existing users will be prompted to select their preferences via an in-app notification.
Data Retention Policy
Anthropic is also extending its data retention period to five years for users who allow their data to be used in model training. The extended retention applies only to new or resumed chats and coding sessions, supporting model development and safety improvements. Users who choose not to participate will remain under the existing 30-day data retention policy.
The company says it protects user privacy through a combination of tools and automated processes that filter or obfuscate sensitive data. Importantly, Anthropic does not sell user data to third parties.
Existing users have until October 8, 2025, to accept the updated terms and make their preferences known. The new policies will become effective immediately upon acceptance and apply only to new or resumed interactions with Claude. Users can modify their privacy settings at any time through Anthropic's platform.
For further information on the updates, users are encouraged to visit the FAQ section provided by Anthropic.