These Safety Standards describe the rules and enforcement practices that apply to use of the Divine™ app (the “Service”). The Service is provided by Verse Communications PBC dba Divine (“Divine”). These standards are incorporated into and form part of the Terms of Service.
Divine enforces these standards within Divine-controlled infrastructure. Because the Service operates on decentralized systems, enforcement actions may not affect content or activity outside Divine-operated interfaces.
“Divine-controlled infrastructure” means websites, apps, relays, media storage, APIs, and other systems owned or controlled by Divine.
“Externally hosted content” means any video, image, audio file, or other media that is referenced within a Nostr event but stored on third-party infrastructure.
“Nostr” means the decentralized protocol known as “Notes and Other Stuff Transmitted by Relays,” which enables users to publish, retrieve, and verify content using cryptographic keypairs and signed messages.
“Nostr event” means a cryptographically signed data object published using the Nostr protocol. A Nostr event may include a public key, timestamp, event kind, content, tags, metadata, and a digital signature. Once broadcast, Nostr events may be stored or replicated across multiple independent relays, including relays Divine does not own or control.
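For illustration only, here is a minimal TypeScript sketch of the event shape described in the public Nostr specification (NIP-01). Field names and encodings follow that specification, not any Divine-specific format, and the sketch is not part of these Safety Standards.

```typescript
// Illustrative only: the general shape of a signed Nostr event per NIP-01.
interface NostrEvent {
  id: string;         // sha256 hash of the serialized event, hex-encoded
  pubkey: string;     // author's public key (the user identifier), hex-encoded
  created_at: number; // Unix timestamp in seconds
  kind: number;       // integer event kind (e.g., 1 for a short text note)
  tags: string[][];   // tags, e.g., references to other events or public keys
  content: string;    // event payload; may reference externally hosted media
  sig: string;        // Schnorr signature over the event id, hex-encoded
}
```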
“Nostr keypair” means the cryptographic public/private keypair used to create, sign, and authenticate Nostr events. The public key functions as a user identifier across the Nostr protocol, and the private key is required to sign events and prove authorship.
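As a purely illustrative sketch of how a keypair is used to sign and verify events, the example below assumes the open-source nostr-tools library (its generateSecretKey, getPublicKey, finalizeEvent, and verifyEvent helpers). It describes general Nostr usage, not the Service's internals.

```typescript
// Illustrative sketch assuming the open-source nostr-tools library; not a
// description of Divine's internal systems.
import { generateSecretKey, getPublicKey, finalizeEvent, verifyEvent } from 'nostr-tools/pure';

const sk = generateSecretKey(); // private key (Uint8Array); required to sign events
const pk = getPublicKey(sk);    // public key (hex string); functions as the user identifier

// Sign a minimal kind-1 (text note) event with the private key.
const event = finalizeEvent(
  { kind: 1, created_at: Math.floor(Date.now() / 1000), tags: [], content: 'hello nostr' },
  sk,
);

// Anyone can verify authorship from the event's pubkey, id, and signature alone.
console.log(verifyEvent(event)); // true if the signature matches
```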
“Relay” means any independently operated server implementing the Nostr protocol for receiving, storing, indexing, or redistributing Nostr events. Unless expressly stated otherwise, relays are not operated by Divine.
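For context only, the sketch below shows the basic NIP-01 message flow between a client and a relay over a WebSocket connection; the relay URL is hypothetical and is not a Divine endpoint.

```typescript
// Illustrative sketch of NIP-01 client-to-relay messages; the URL is hypothetical.
const ws = new WebSocket('wss://relay.example.com');

ws.onopen = () => {
  // Ask the relay for up to 10 recent kind-1 (text note) events.
  ws.send(JSON.stringify(['REQ', 'sub-1', { kinds: [1], limit: 10 }]));
};

ws.onmessage = (msg) => {
  // Relays reply with ["EVENT", subId, event], ["EOSE", subId],
  // ["OK", eventId, accepted, message], or ["NOTICE", message].
  console.log(JSON.parse(msg.data as string));
};
```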
“User content” means any Nostr event, media, metadata, username, profile information, or other material submitted, published, uploaded, linked, or otherwise made available by a user through the Service, whether hosted by Divine or externally.
Users may not use the Service to create, publish, distribute, or facilitate content or activity that is unlawful or harmful, that violates applicable law, or that violates the Terms of Service.
This includes, without limitation, content involving child sexual abuse or exploitation, harassment or abuse, credible threats of violence, non-consensual intimate imagery, doxxing or disclosure of private information, scams or fraud, malware, impersonation, or deceptive conduct.
Users may not use synthetic, manipulated, or AI-generated media in a manner that deceives, impersonates, exploits, or materially misleads others.
Divine maintains a zero-tolerance policy for child sexual abuse material and related exploitation. Divine may remove such content from Divine-controlled infrastructure, restrict access, preserve evidence, and report to appropriate authorities where required or appropriate.
Certain mature content may be restricted, hidden by default, or made available only to users who meet applicable age requirements. Users are responsible for complying with all age restrictions.
Users may not interfere with the operation or integrity of the Service, including by circumventing safeguards, engaging in coordinated manipulation, or using automation or bulk activity in ways that degrade the Service or harm users.
Divine may take action in response to violations of these Safety Standards, including removing or limiting content within Divine-controlled infrastructure, restricting or suspending access, disabling features, or taking other actions reasonably necessary to protect users and the Service. Actions may include limiting visibility or access to content within Divine-controlled interfaces without removing the underlying content from decentralized networks.
In cases involving imminent harm, credible threats, or illegal activity, Divine may take immediate action, including removal of content and referral to appropriate authorities.
Divine may also cooperate with law enforcement or other authorities where required or appropriate.
Divine may use automated tools, third-party systems, human review, and other methods to identify and address violations. These systems are not perfect, and Divine does not guarantee that all violations will be detected or prevented. Divine has no general obligation to monitor all content or proactively detect all violations.
Users may report content or accounts through tools made available in the Service. Divine aims to review reports of objectionable content promptly and generally within 24 hours, although response times may vary based on volume, severity, and available information.
The Service may provide tools such as blocking, muting, filtering, and moderation lists that allow users to manage their experience. These controls apply within Divine-operated interfaces as implemented by Divine.
Enforcement actions apply only within Divine-controlled infrastructure and do not extend to independent relays, clients, or third-party systems. Because the Service interacts with decentralized systems, content or accounts removed or restricted within Divine-controlled infrastructure may remain visible or accessible through other clients, relays, or third-party systems. Divine does not control how independent operators handle content.
Divine does not engage in automated decision-making or profiling that produces legal or similarly significant effects on users within the meaning of applicable data protection law.
Divine may, but is not obligated to, review requests to reconsider moderation decisions.
These Safety Standards may be updated from time to time. When they are, Divine will post the updated version and revise the “Last Updated” date. Unless a different effective date is stated, changes will become effective 30 days after posting. Your continued use of the Service after the effective date of the updated Safety Standards constitutes your acceptance of them.