Hopefully any questions you have will be answered by the resources below, but if you have additional questions feel free to ask them in the comments. If you're quite technically minded, the Zano whitepaper gives a thorough overview of Zano's design and its main features. So, what is Zano? In brief, Zano is a project started by the original developers of CryptoNote. Coins with market caps totalling well over a billion dollars (Monero, Haven, Loki and countless others) run on the codebase they created. Zano is a continuation of their efforts to create the "perfect money", and brings a wealth of enhancements to their original CryptoNote code. Development happens at a lightning pace, as the GitHub activity shows, but Zano is still very much a work in progress. Let's cut right to it: here's why you should pay attention to Zano over the next 12-18 months. Quoting from a recent update:
Anton Sokolov has recently joined the Zano team. ... For the last few months Anton has been working on theoretical research dedicated to log-size ring signatures. These signatures allow for a logarithmic relationship between the number of decoys and the size/performance of transactions. This means that we can set mixin levels of up to 1000 while keeping transaction size and processing speed reasonable. This will take Zano’s privacy to a whole new level, and we believe this technology will turn out to be groundbreaking!
If successful, this scheme will make Zano the most private, powerful and performant CryptoNote implementation on the planet. Bar none. A quantum leap in privacy with a minimal increase in resource usage. And if there's one team capable of pulling it off, it's this one.
What else makes Zano special?
You mean aside from having "the Godfather of CryptoNote" as the project lead? ;) Actually, the calibre of the developers/researchers at Zano probably is the project's single greatest strength. Drawing on years of experience, they've made careful design choices: optimizing performance with an asynchronous core architecture, and flexibility and extensibility with a modular code structure. This means the developers are able to build and iterate fast, refining features and adding new ones at a rate that makes bigger and better-funded teams look sluggish at best. Zano also has some unique features that set it apart from similar projects:

Privacy

Firstly, if you're familiar with CryptoNote you won't be surprised that Zano transactions are private. The perfect money is fungible, and therefore must be untraceable. Bitcoin, for the most part, does little to hide your transaction data from unscrupulous observers. With Zano, privacy is the default. The untraceability and unlinkability of Zano transactions come from its use of ring signatures and stealth addresses. This means that no outside observer can tell whether two transactions were sent to the same address, and each transaction has a set of possible senders that makes it impossible to determine who the real sender is.

Hybrid PoW-PoS consensus mechanism

Zano achieves an optimal level of security by utilizing both Proof of Work and Proof of Stake for consensus. By combining the two systems, it mitigates their individual vulnerabilities (see the 51% attack and the "nothing at stake" problem). For an attack on Zano to have even a remote chance of success, the attacker would have to obtain not only a majority of hashing power but also a majority of the coins involved in staking. The system and its design considerations are discussed at length in the whitepaper.

Aliases

Here's a stealth address: ZxDdULdxC7NRFYhCGdxkcTZoEGQoqvbZqcDHj5a7Gad8Y8wZKAGZZmVCUf9AvSPNMK68L8r8JfAfxP4z1GcFQVCS2Jb9wVzoe.
I have a hard enough time remembering my phone number. Fortunately, Zano has an alias system that lets you register an address to a human-readable name. (@orsonj if you want to anonymously buy me a coffee)

Multisig

Multisignature (multisig) refers to requiring multiple keys to authorize a Zano transaction. It has a number of applications, such as dividing up responsibility for a single Zano wallet among multiple parties, or creating backups where the loss of a single seed doesn't lead to the loss of the wallet. Multisig and escrow are key components of the planned Decentralized Marketplace (see below), so consideration was given to each of them from the design stage. Thus Zano's multisig, rather than being tacked on at the wallet level as an afterthought, is part of its core architecture, incorporated at the protocol level. This base-layer integration means months won't be spent in the future on complicated refactoring efforts to integrate multisig into a codebase that wasn't designed for it. Plus, it makes it far easier for third-party developers to include multisig (implemented correctly) in any Zano wallets and applications they create in the future.

(Double Deposit MAD) Escrow

With Zano's escrow service you can create fully customizable p2p contracts that are designed, once signed by participants, to enforce adherence to their conditions in such a way that no trusted third-party escrow agent is required. The Particl project, aside from a couple of minor differences, uses an escrow scheme that works the same way, so I've borrowed the term they coined ("Double Deposit MAD Escrow") as I think it describes the scheme perfectly. The system requires participants to make additional deposits, which they forfeit if there is any attempt to act in a way that breaches the terms of the contract.
Full details can be found in the Escrow section of the whitepaper. The usefulness of multisig and the escrow system may not seem obvious at first, but as mentioned before they'll form the backbone of Zano's Decentralized Marketplace service (described in the next section).
What does the future hold for Zano?
The planned upgrade to Zano's privacy, mentioned at the start, is obviously one of the most exciting things the team is working on, but it's not the only thing.

Zano Roadmap

Decentralized Marketplace

From the beginning, the Zano team's goal has been to create the perfect money. And money can't just be some vehicle for speculative investment; money must be used. To that end, the team have created a set of tools to make it as simple as possible for Zano to be integrated into eCommerce platforms. Zano's APIs and plugins are easy to use, allowing even those with very little coding experience to use them in their eCommerce-related ventures. The culmination of this effort will be a full Decentralized Anonymous Marketplace built on top of the Zano blockchain. Rather than being accessed via the wallet, it will act more as a service - Marketplace as a Service (MaaS) - for anyone who wishes to use it. The inclusion of a simple "snippet" of code in a website is all that's needed to become part of a global decentralized, trustless and private eCommerce network.

Atomic Swaps

Just as Zano's marketplace will allow you to transact without needing to trust your counterparty, atomic swaps will let you easily convert between Zano and other cryptocurrencies without having to trust a third-party service such as a centralized exchange. On top of that, it will also pave the way for Zano's inclusion in the many decentralized exchange (DEX) services that have emerged in recent years.
Where can I buy Zano?
Zano is currently listed on the following exchanges: https://coinmarketcap.com/currencies/zano/markets/ It goes without saying that neither I nor the Zano team work for any of the exchanges or can vouch for their reliability. Use them at your own risk and never leave coins on a centralized exchange for longer than necessary. Your keys, your coins! If you have any old graphics cards lying around (both AMD & NVIDIA), then Zano is also mineable through its unique ProgPowZ algorithm. Here's a guide on how to get started. Once you have some Zano, you can safely store it in one of the desktop or mobile wallets (available for all major platforms).
How can I support Zano?
Zano has no marketing department, which is why this post has been written by some guy and not the "Chief Growth Engineer @ Zano Enterprises". The hard part is already done: there's a team of world-class developers and researchers gathered here. But, at least at current prices, the team's funds are enough to cover the cost of development and little more. So the job of publicizing the project falls to the community. If you have any experience in community building/growth hacking at another cryptocurrency or open-source project, or if you're a Zano holder who would like to ensure the project's long-term success by helping to spread the word, then send me a PM. We need to get organized. Researchers and developers are also very welcome. Working at the cutting edge of mathematics and cryptography means Zano provides challenging and rewarding work for anyone in those fields. Please contact the project's Community Manager u/Jed_T if you're interested in joining the team. Social links: Twitter, Discord server, Telegram group, Medium blog. I'll do my best to keep this post accurate and up to date. Message me with any suggested improvements and leave any questions you have below. Welcome to the Zano community and the new decentralized private economy!
Part 1 | Part 2 I'm at $BigClient, which is taking a Citroen-like approach to infrastructure and operations. "We recognize that the MacPherson strut is simple, efficient, good enough for most use cases and accepted by everyone in the industry, but we shall do it with hydraulic fluid at high pressure. What could go wrong?" Except $BigClient's far away from a competent Citroen shop. $BigClient's Citroen has gone through a few years of 'just keep it running on the cheap' upkeep without access to factory parts. I've got an odd patching problem on a handful of servers. Systems are rolling back to insecure versions (2.0.2 -> 1.4.6) and nobody knows why. Or at least, nobody's talking. I don't know what to do yet, so I decide to go and get lunch. I work out the possibilities.
1. There's something wrong with our validation procedure: they're actually patched and we're reading the wrong thing.
2. There's something or someone else downgrading these systems.
Number 1 requires more documentation, which $BC doesn't seem to want to show me. Number 2 might be hiding in logs, which are emailed to me on a regular basis. I walk back to my cubicle, grab my laptop and a notebook and find a quiet corner to figure things out. I find one in a tiny conference room. I read through my emails and search for any of the logs from the API servers. I spend about ten minutes on Stack Exchange for the appropriate sed, awk, tee and cat munging to pare them down to what I want. Eventually I dump them all to Excel, because I am a bad person. Some filtering and I can see what's going on. The system orchestration updates each server every other midnight. I see about three quarters of them download the 2.0.2 version as part of the night's update. Every two nights a (seemingly) random selection of servers updates. I scribble the order on the conference room whiteboard and stare at it for a few minutes. Nothing in the orchestration system logs shows another process loading the older 1.4.6 version. But something is. Nothing in the logs emailed to me obviously points to another process. I take a walk to get a coffee and think. Nothing comes to me and I have to scour the kitchen for unflavored coffee. I walk back to my conference room to find an intern-like person. me:"Hey, I apologize. I didn't know the room was reserved. I'll take my stuff." Other person:"That's ok. Are you Rob?" me:"Nope, sorry" I take my stuff and make my way back to my cubicle. A few minutes of searching leads me to a shared root password for the servers, stored in the password vault. I log in to one of the remaining servers running 2.0.2 and look at the running processes. Nothing obvious like "random updater". I'm stumped. I lean back and stare at nothing in particular, trying to come up with some ideas. Unfortunately, it's fairly packed and I'm next to a bullpen.
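(For the curious: the log paring itself was nothing exotic. Something like this Python sketch would have done the same job as my sed/awk pipeline. The log format, hostnames and versions here are invented for illustration; $BC's real logs looked nothing this tidy.)

```python
import re
from collections import defaultdict

# Hypothetical log line: "2020-09-14T00:02:11 api-07 installed version 2.0.2"
LINE_RE = re.compile(r"(\d{4}-\d{2}-\d{2})T\S+\s+(\S+)\s+installed version (\S+)")

def updates_by_night(log_lines):
    """Group (server, version) pairs by the night they updated."""
    nights = defaultdict(list)
    for line in log_lines:
        m = LINE_RE.match(line)
        if m:
            date, server, version = m.groups()
            nights[date].append((server, version))
    return dict(nights)

sample = [
    "2020-09-14T00:02:11 api-07 installed version 2.0.2",
    "2020-09-14T00:02:13 api-03 installed version 1.4.6",
    "2020-09-16T00:02:09 api-07 installed version 1.4.6",
]
print(updates_by_night(sample))
```

Grouping by night is exactly the view that makes the every-two-days rollback pattern jump out.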
Voice 1:"So the Sky Caps put blotter in the vat without telling anyone" Voice 2:"Hilton Honors kicks Marriott Bonvoy's ass any day." Voice 3:"No, I'll pick her up at 4" The voices wash over me in some clip-reel workplace-sitcom haze. I'm not going to get anything done. I take a walk around the offices to get the lay of the land. It's a Hanna-Barbera cartoon of grey cube farms, tan breakrooms, free coffee but no snacks. The only attempts at color are people's cubicles. Family pictures, shirtless men with fish, desk toys and action figures. It's like a mall: everything's pleasant, non-threatening and in identically-sized stalls, with colorful (but bounded) individuality, all for commerce. Then I find the Hot Topic meets Successories manifesting in a cubicle. There are two dorm-room-sized posters of the gold Bitcoin coin, along with framed inspirational quotes about success and perseverance set against pictures of Game of Thrones characters and muscle-bound men in insignia-less camo. A new leather jacket with an embroidered skull is on the back of the chair. This person is either a hoot or insufferable. I keep walking. I have a breakthrough. Where are the API servers getting the older version to install? Maybe that'll lead me into the library. I'm not yet Adso, but perhaps I'm one of the other, lesser scribes copying my book and scribbling fanciful drawings of the things I miss, like decent coffee and a cell-mate that doesn't snore. I walk back to my cubicle. A different intern-shaped person is in the conference room, all alone. I can't save them. Eventually they'll be standing in the corner of their cubicle looking away while the middle manager cleans out the rest of their team. I'm in my seat. Some searching results in a few possible repositories. Some more searching finds me the one repo that still has v1.4.6 of this application. Just to make sure, I compare a downloaded copy of v1.4.6 and the installed version of v1.4.6 on one of the servers.
I search all the folders and files for the URL of the repo server and find it. In the application itself. The server waits every two days and looks to the repo. If the installed version is not equal to v1.4.6, it downloads v1.4.6 from the server and installs it, then forces a restart. This code is commented out (made non-executable) along with an actual comment: /REMOVE BEFORE PRODUCTION. I quickly scan through the API servers to find one of the ones still running 2.0.2. I search for the term "REMOVE BEFORE PRODUCTION". And there it is, in the application code. Except it's not commented out. In a text editor, I write up my findings, conclusion and a recommended fix: delete the upgrade code snippet, increment to 2.0.3, push it out using the orchestration tool and call it a day. LC Chat won't let me attach my text file, so I breathlessly LC Chat my document, line by line, at Vincent, the poor bastard tasked with closing audit finding 162, the mystery of the random rollback. Vincent:... Clearly, Vincent is choosing his congratulatory language carefully. Vincent:"Can't apply the fix. The application is owned by Development. They're behind on other things, so they won't update the software until next quarter." me:"It's about thirty lines of code we can comment out" Vincent:"Can we say it's fixed for the audit since we know what the problem is?" me:"No. We can patch it, or we could write up a remediation plan and get it on some schedule." me:"But that's more paperwork than the actual fix." Vincent:"But Ops isn't on good terms with Development." me:"So they're not going to touch it any time soon." Vincent:"Probably not" me:"You guys own that repo server, too" Vincent:"I don't see how that's good for anything" me:"We cut out the update code in 2.0.2 and call it 2.0.3. We name the file 1.4.6 and replace the existing 1.4.6 on the repo server. Either the app gets updated via your orchestration server or it updates itself. We're fixed in two days either way."
Vincent:"But policy requires that we get approval" me:"There's an exception: if you have a superior in Operations sign off, you can call it an emergency fix. Ask Trevor. He just needs to not tell anyone else. You submit the ticket and eventually the devs will get to it and fix the problem for good. Until then, you pass that part of the audit." Vincent tells me he's going to talk to Trevor. I'm going to take a walk. Out of curiosity, I go back to the Hot Topic cubicle to get a look at its occupant. The jacket is gone and the monitors are off. Mystery person has left for the day, I assume. I look at the large jars of nutritional supplements with macho names: Gorilla Rage, LumberJacked, Psycho Focus. I notice the nameplate on the outside of the cubicle. Oh, no. Ian. To Be Continued... edit- made modifications to satisfy Internal Audit 8-)
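(Postscript for anyone trying to picture the offending snippet: here's a rough Python reconstruction of the self-downgrade logic. The names, the URL and the return values are mine; the real application's code and language were different, and this is only the shape of what we found.)

```python
PINNED_VERSION = "1.4.6"
REPO_URL = "http://repo.internal/app-1.4.6"  # placeholder; not the real repo URL
CHECK_INTERVAL_DAYS = 2  # the app woke up on this cadence

def enforce_pinned_version(installed_version):
    """REMOVE BEFORE PRODUCTION: keeps the fixture build installed for testing."""
    if installed_version != PINNED_VERSION:
        # In the real app: download PINNED_VERSION from REPO_URL,
        # install it, then force a restart.
        return "downgrade"
    return "noop"

print(enforce_pinned_version("2.0.2"))  # -> downgrade
```

Harmless in a test rig; a time bomb once someone ships it with the comment markers deleted.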
Hi! I'm the solo developer of a social media platform, new religious movement, and religious organization. I've been working on this since early 2019. I've only told a few dozen people about it and I'm looking for feedback. Please ask me anything. Thanks!
A lot of this text was originally going to be posted to changemyview because I'm looking to be challenged on my assumptions here, and on my approach. I checked with the mods there and couldn't find a way to post it that wouldn't count as promotion (which would violate the subreddit rules). Then I considered posting it in philosophy and IAmA, and got a similar answer. To be clear: I support their decisions to not allow this post. There's a lot of reasons to be concerned about someone running up and saying "Hey! Do you wanna hear about my religion?" and I have total respect for anyone who says "No." I reached out to casualiama a few hours ago and haven't heard back. If they decide to lock or remove this I support their decision and will try another subreddit. I welcome lots and lots of questions about this project. I believe this project will be helpful, but I am open to having my view about its viability changed. This is a big weird thing and I've been focusing on it alone for a long time. This work has probably had a big impact on my objectivity. This didn't used to exist and maybe it doesn't need to. Or maybe it has some irredeemable flaw that I've failed to consider. I don't want to see this get launched if that's the case. The main cost of this project has been my time, and I've learned enough while doing it to justify the time spent on it even if I'm convinced this project isn't viable. I can't make any more progress alone and I need new people to get involved. This can change and it can evolve as it grows because I've tried to build it to be guided by new people. I hope that I've done a good job of building it that way. The fastest way to learn about this in order to ask me questions is by visiting the website. https://churchofearth.org/ (I sure hope that this post does well and I've set up the site well enough to accommodate the traffic.) (This text will contain many links to the website) (I'm really nervous about this)
Why have I done this?
I would describe this effort as coming out of the part of my brain that would be firing if I were an active survivalist/doomsday prepper. In 2019 I became very concerned about the state of the world, especially as I thought about what it will look like for the remainder of my life. It seemed logical to amass skills and resources in order to protect myself and the people around me. I had never focused on my own survival because I've lived a life of privilege, and then I realized that there were scenarios where my privilege would not protect me. During that research phase I came to the conclusion that I couldn't just turn inward. As people turn inward our shared challenges become harder to address. I started looking more into the good work people were doing all over the world. I looked outward for inspiration. There's lots of folks doing lots of amazing work trying to solve our shared challenges. At every level of society, in every region, in big ways and small ways. This is often a very helpful world. I started researching things and became less pessimistic about the future of our world. This doesn't mean there aren't a pile of challenges for us to solve, it means that I'm better able to see the people trying to solve our shared challenges. When I couldn't see them (and wasn't looking for them) the future of our world seemed much worse. I still see the darkness, and it's scary and overwhelming and huge, but now I see more light. This project represents what I was able to build with the skills and experience I had. This project started out as just a social media platform intended to connect people who wanted to help. While I was building that I discovered a faith within myself about the future. The new religious movement is the structure I've built to help grow this faith. It's based on religious humanism, which has been a thing for several hundred years. I stopped working heavily on this project in late 2019, and formally stopped on February 1st of 2020. 
I started it up again in April of 2020. There's a short post about it here.
Help others and you'll live in a more helpful world.
Improve yourself and you'll improve the world.
Consume less and you'll value what you have more.
People who feel the faith of the Church of Earth are called Helpers. There are no rituals to join this faith, although anyone can make rituals. Anyone can identify themselves as a Helper. Don't judge them by who they say they are, judge them by what they do. In their acts of faith they should be trying to help you, and only if you consent to their help. This faith is optional. No one can be forced to adopt this faith. You can't force a more helpful world on people, you can only help them build it.
The social media platform
The social media platform is built to help people grow this religion. It allows them to focus on activities that relate to the core beliefs of the Church of Earth. People do not need to share this faith to use this platform or its features. The platform is anonymous. There are no user profiles. Every type of module can inspire almost any other type of module. The modules all have different functionality. This platform is complex software. Social media platforms like Facebook and Twitter are designed to be simple and addictive. Those are deliberate design choices intended to serve the goals of the organizations that built them. These goals relate to revenue generation through the use of user data. This organization has different goals, and one of those goals involves promoting hard work. There will be a learning curve with this platform. I'm the only one who has used it so far and I've probably failed to consider a lot of things. This platform will change over time as more skills and knowledge improve its construction. This platform can be considered one of our great works. I can tell you right now people will find a number of bugs on the platform. I am expecting something unexpected, critical, and stressful to happen once people start visiting this site. Please be kind and patient with this platform, and with me. I'm very tired.

Upvotes/Downvotes/Likes/Dislikes

The purpose of this platform is to help grow a more helpful world. The upvote/downvote button labels are: This is Helpful and This is Not Helpful. Everything added to this platform should be helpful; every post includes fields where you can describe who you're trying to help, and why. There are a limited number of votes that can be given every day. Creating content increases your votes for the day. The platform has the following modules:
Scriptures - journaling functionality intended to let you explore a subject. This could be a journal entry about your personal development, or a short story, or some introspection, or anything helpful you can possibly think of that would be a lot of text. - Infographic
Sermons - Brief text snippets or links to external resources intended to inspire other users. This module is how we link to other sites, share other content, or say short messages.
Goals - Things you want to get done. You can add to the organizations goals, and the goals for the website. We can all work together. - Infographic
Prayers - Requests for help. If you need something you can ask the platform and hopefully someone will see it and help you. You can list the type of help you need. It's up to you to be realistic about what can be done on this platform. If your prayers aren't getting answered there are other tools to help you solve some problems. - Infographic
Rituals - Things you do alone or in groups to help you accomplish tasks or focus yourselves. Our species uses rituals to greet and communicate with each other. We have rituals that are events, or meals. We have rituals that involve many people singing or performing an action. The Church of Earth has not created any official rituals.
Missions - Events intended to bring people together to offer help in their communities. These can have rituals attached to them so that people know what social structure is expected when they meet.
The Gospel of the Church of Earth

All of the content we create can be connected in a big, anonymous web of helpfulness. Any content type can inspire any other content type, except for prayers. Prayers can only be answered (publicly or privately, depending on the type of prayer). Blog posts by the Church of Earth can inspire content from users. The goals of the Church of Earth appear in the same module as the goals of individual users of this platform.

Intentions

When you post something you have to say who you think you're helping, and why. You also need to select a subcategory like "I am being thoughtful" or "I am being creative" or "I am being inspirational" or "I am being critical". This will help other users understand how to react to you in a way that contributes to meaningful interactions. It's just me right now, so that's pretty funny and awkward. The platform is live. Anyone can create content. Accounts can only create one piece of content until their first contribution receives a vote by another user saying "This is Helpful". There are currently no users on the platform. I've created content but there's no one to say if it's helpful, so I'm still a pending user. On September 24th, 1 year after I first posted the 3 core beliefs to the site, 5% of all pending users will be selected at random and added to the site. Those people will then be able to use the site, but their profiles will not indicate they were among the First Five. My account isn't any more likely to be added than anyone else's. This is an act of faith. It doesn't take a lot of people to make big things happen and we're all capable of amazing things. That is the message this platform has been designed to share. Note: That doesn't mean there won't be an effort to deal with spam comments, or content that violates the terms of service for this site. Those efforts will be transparent, though. It's unreasonable to think that people won't do silly things with a site like this on the internet.
Silly things are fun. One of the "intentions" of the site is "to be creative".
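For the technically curious: the random selection described above needs nothing fancier than this sketch (the function name and the seed are illustrative; the real selection will run inside the platform code):

```python
import random

def select_first_wave(pending_users, fraction=0.05, seed=None):
    """Pick a random fraction of pending users to activate.

    Every account, including mine, has the same chance of selection.
    """
    rng = random.Random(seed)
    count = max(1, round(len(pending_users) * fraction))
    return rng.sample(pending_users, count)

pending = [f"user{i}" for i in range(100)]
wave = select_first_wave(pending, seed=42)
print(len(wave))  # 5% of 100 pending users -> 5 accounts activated
```

With a published seed, anyone could re-run the draw and verify that the selection really was random, which fits the transparency goal.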
The religious organization
This organization will have a different look depending on how successful it is. I've built it in the hope it will see a meaningful amount of adoption, but all of that can be scaled down and still be helpful. Even if it's just me feeling this way for the rest of my life I consider it a meaningful improvement to my worldview and personality. I'm fairly confident some unknown quantity of other people will also see the value in this and that we can build something together. The Church of Earth organization is the least planned out because it needs to be developed by people with greater specialization than me. The main points are this:
Governance will be handled democratically, through the Church of Earth platform, and it will include a random selection of top candidates. (This functionality hasn't been built yet)
There will be term limits for everyone. Especially me.
Governance of the organization will include people who do not identify as Helpers.
Compensation of employees will be nonstandard as well. Employees will be given baseline salaries for the region they live in, regardless of their role, and everyone will qualify for bonuses that will be given out by other employees.
Here's an infographic about this. The organization will be structured like a nonprofit and will prioritize pursuing goals that align with the 3 core beliefs of the Church of Earth faith. It will require donations, but the hope is that it will eventually develop revenue to be self-sufficient. I have some ideas for revenue generation. They're not super innovative; they're just taking relevant things and doing them in a way that promotes the faith of the Church of Earth as it operates as a nonprofit entity in a capitalist economy. This business model is well developed in North America by other religious institutions. These are the best ideas I've come up with, but none of them can proceed until I find people more qualified than me to do this. It's very likely that I've failed to consider elements of these proposals.
Education: It should be possible to teach many subjects from a religiously helpful perspective. Helping people is a secular act, but the help at the core of the faith of the Church of Earth is not. On top of teaching specific programs from a helpful perspective, there could also be a seminary school where Preachers are trained.
Property Management: The Church of Earth can own property similarly to other non-profits or religious organizations. They can be governed in transparent and helpful ways. This can be as simple as free wifi, or as complex as community gardens and shared maintenance duties in order to build and maintain a small regional community.
The website can be modified to enable a lot of this stuff, including a transparent overview of building finances, public space scheduling and communications. Once we build a template for a type of property and its amenities, we can look for other properties to offer similar experiences.
The physical structure that represents the Church of Earth in a region is called a Community Guild. These can have rental spaces similar to other community centers or churches, they'll also be hubs for community activity facilitated through the website.
Cryptocurrency: Because the amount of activity that can be accomplished on the platform per day is limited, we can mint and assign a cryptocurrency token (in the spirit of Bitcoin) whenever someone says "This is Helpful", and maintain a digital economy. Users of the platform can be paid to be helpful, and possibly engage in their faith if that applies to them.
This is by no means developed. There are a lot of hurdles to overcome before this can happen. I am very aware of the fact that what I'm proposing here is that a religious organization can create its own internal economy.
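To make that internal-economy idea concrete, here is a minimal sketch of a vote-capped "token per helpful vote" ledger. Everything in it (the class name, the daily cap, the one-token rule) is illustrative, not a committed design:

```python
from collections import defaultdict

DAILY_VOTE_CAP = 10  # illustrative; the platform limits votes per day

class HelpfulLedger:
    def __init__(self):
        self.balances = defaultdict(int)    # tokens earned by content authors
        self.votes_cast = defaultdict(int)  # votes spent by each voter today

    def mark_helpful(self, voter, author):
        """Spend one of the voter's daily votes; mint one token to the author."""
        if self.votes_cast[voter] >= DAILY_VOTE_CAP:
            return False  # out of votes for today
        self.votes_cast[voter] += 1
        self.balances[author] += 1
        return True

ledger = HelpfulLedger()
ledger.mark_helpful("alice", "bob")
print(ledger.balances["bob"])  # 1
```

Capping votes per day is what bounds the token supply: the economy can only grow as fast as people spend their limited "This is Helpful" votes.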
What should you know about me?
I wish I could be like Satoshi Nakamoto. That's the pseudonym of the person who created Bitcoin. We don't know for sure who actually created the code. They just worked on it, put it out there, and it went on to become something amazing and complex. I spent a long time trying to figure out how to do that with the Church of Earth. I absolutely love everything about this project but I want to minimize my involvement in it. I also think there will be negative impacts on me, and this organization, if people focus on me instead of the faith. The Church of Earth is about what everyone can do, together. I'm no more special than anyone else. I'm terribly imperfect. I'm also a 39 year old white Canadian guy. While I've faced many struggles I've lived a life of privilege. I don't speak for everyone, or to everyone. No one can. People face problems I couldn't even consider. My ability to help those people directly is limited, but I have the skillset required to build something that would allow others to help them. The world is hard, and harsh, and frustrating, and scary. It's also wonderful and amazing and beautiful. I think that if we all work together we can spend the rest of our lives deliberately building a better world for everyone, in big ways and small ways. Knowing that's happening makes me happy and it motivates me to work harder. I wasn't a religious person until I discovered this faith within myself. This faith has allowed me to focus myself towards something that I believe is helpful to my community. I think it can do the same for others. What do you think? Thanks for reading. Ask me anything, please.
The team's overall technical background is solid, and the project's CTO and CEO have rich experience in related industries;
CoinEx's business scope has expanded, and the development of the public chain plays a decisive role in promoting the exchange business;
Project operation information is transparent, and the development process is consistent with the roadmap;
The unlocking schedule is clear, and the tokens held by the team will be unlocked gradually over the next five years;
The project uses a PoS consensus mechanism. Its mainnet has launched, and the block time is stable at 2–3 seconds.
It is not yet clear whether the tri-chain operating plan can achieve the project's development goals;
Information on the implementation of cross-chain and related technologies is limited, and development status will need to be assessed from later project disclosures;
The team currently holds a large share of the tokens, so token distribution is relatively concentrated;
There are few application scenarios for the project's token, and more ecosystem scenarios need to be developed;
As a deflationary token, CET must balance the competing interests of public-chain users and token holders.
The development of CoinEx Chain contributes to the future development of CoinEx's centralized and decentralized exchanges; the tri-chain concept simplifies the functions of each chain, improving their performance. At present, few exchanges are building public chains, and competition is not yet fierce.
Considering the status and development prospects of the project, TokenInsight gives CoinEx a rating of BB with a stable outlook.
1. Multidimensional evaluation
2. Project analysis
CoinEx (CoinEx Technology Limited) was established in December 2017 and is headquartered in Hong Kong, China. It is a sub-brand of the ViaBTC mining pool. CoinEx's business scope currently includes the CoinEx exchange, the CoinEx public chain, and the CoinEx decentralized exchange. The platform's current development focus is the public chain and the exchange. The main purpose of the public chain is to build decentralized exchange (DEX) infrastructure and an ecosystem around DEX. CoinEx business structure, Source: CoinEx; TokenInsight
“ CoinEx Chain uses three chains operating in parallel — DEX, Smart, and Privacy — together with cross-chain technology to create a rich decentralized exchange ecosystem and blockchain financial infrastructure.

The core of CoinEx's early business was the exchange, consisting of two major categories: spot and derivatives trading. Currently, there are 123 currencies listed, covering 302 trading pairs. On June 28, 2019, CoinEx released the CoinEx Chain public chain whitepaper, aiming to build a decentralized trading system (CoinEx DEX) with community-based operations and transparent transaction rules, providing a user-controlled asset-trading environment built to the highest technical standards in the industry. CoinEx Chain has since become another development focus of CoinEx, and CoinEx Token (CET), originally the native token of the CoinEx exchange, will now be developed mainly as the built-in token of the public chain.

CoinEx Chain is a public chain based on the Tendermint consensus protocol and the Cosmos SDK, using a PoS mechanism. It plans to support 42 nodes at launch, and any entity in the ecosystem can take part in the validator election by staking CET. New block rewards and the transaction fees contained in each block serve as the reward for running a node.

CoinEx Chain has developed three public chains with different positioning and functions in order to meet the needs of blockchain trading for transaction performance, smart contracts, and privacy protection at the same time. They operate in parallel and collaborate through cross-chain technology. At present, the block time of the public chain is between 2–3 seconds.
According to TokenInsight's observation, the block time is stable, but the number of transactions on the CoinEx public chain is still low at present: roughly 30,000 transactions per 24 hours. The TPS disclosed by CoinEx can reach up to 1,500.

CoinEx Chain uses a tri-chain parallel model to build a more vibrant ecosystem around DEX. The three chains — the DEX public chain, the Smart public chain, and the Privacy public chain — are respectively responsible for decentralized trading, smart contracts, and on-chain privacy protection. CET that needs to participate in complex financial contracts can be transferred from the DEX public chain to the Smart public chain and moved back afterwards; CET that needs coin mixing can go through the privacy transactions of the Privacy public chain and eventually return to the DEX public chain. Each public chain handles its own duties, and they are interconnected via cross-chain relays. Besides preserving each chain's transaction-processing speed and functional focus, together they provide richer and safer functionality, jointly constituting the CoinEx decentralized public chain ecosystem.

In addition, CoinEx Chain supports any participant issuing new tokens on the chain and creating trading pairs for them. CoinEx Chain guarantees the circulation of new tokens by establishing a trading pair between each new token and CET.
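The gap between observed usage and the disclosed capacity can be estimated with a quick back-of-the-envelope calculation using the two figures quoted above (illustrative arithmetic only; real chain load varies block to block):

```python
# Rough utilization estimate for the CoinEx Chain figures quoted above:
# ~30,000 transactions per 24 hours vs. a disclosed capacity of up to 1,500 TPS.

SECONDS_PER_DAY = 24 * 60 * 60           # 86,400

tx_per_day = 30_000                      # observed 24h transaction count
claimed_tps = 1_500                      # disclosed peak capacity

average_tps = tx_per_day / SECONDS_PER_DAY
utilization = average_tps / claimed_tps

print(f"average TPS: {average_tps:.3f}")          # ~0.347
print(f"capacity utilization: {utilization:.5%}")  # ~0.02315%
```

In other words, average throughput is well under one transaction per second, a tiny fraction of the stated peak.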
2.2 Component architecture
“ Tendermint Core and the Cosmos SDK improve the performance and operability of the blockchain. The SDK's packaging removes non-related logic from consideration, reducing development complexity.

CoinEx Chain is based on Tendermint Core and the Cosmos SDK, both of which give a large boost to the development of the CoinEx public chain. The Cosmos SDK implements the application logic of the blockchain; together with the Tendermint consensus engine, it realizes the three-layer architecture of the CoinEx public chain: the application layer, the consensus layer, and the network layer.

Tendermint: Tendermint is based on state-machine replication and is suitable for blockchain ledger storage. It reaches Byzantine-fault-tolerant consensus on a list of transactions; the transactions are executed in the same order on every node, which eventually yields the same state. Tendermint can be used to build a wide range of distributed applications.

Cosmos SDK: The Cosmos SDK is a blockchain framework that supports building multi-asset chains with a PoS (Proof of Stake) or PoA (Proof of Authority) consensus mechanism. Its goal is to let developers easily build custom blockchains from scratch while enabling interaction with other blockchains. The framework implements general functions such as account management, community governance, and staking in modular form, so using the Cosmos SDK to build a public chain simplifies development and eases operation.
Tendermint is a BFT consensus protocol designed for a partially synchronous environment; it can achieve throughput within the latency bounds of the network and of each process. The CoinEx public chain is developed on both components, improving the blockchain's performance and operability, while the SDK's packaging further removes non-related logic from consideration and reduces developers' implementation complexity. Tendermint and the Cosmos SDK are connected and interact through the Application Blockchain Interface (ABCI). Cosmos SDK and Tendermint interworking structure, Source: CoinEx; TokenInsight
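Tendermint's core guarantee — every replica applies the same ordered transaction list and therefore reaches the same state — can be illustrated with a toy state machine. This is a conceptual sketch only, not Tendermint or Cosmos SDK code; the `Replica` class and account names are invented for illustration:

```python
# Toy illustration of state-machine replication: if every replica applies the
# same transactions in the same order, all replicas end in the same state.

class Replica:
    """A minimal deterministic ledger: account -> balance."""
    def __init__(self):
        self.state = {}

    def apply(self, tx):
        sender, receiver, amount = tx
        self.state[sender] = self.state.get(sender, 0) - amount
        self.state[receiver] = self.state.get(receiver, 0) + amount

# Consensus fixes the order of transactions; execution is deterministic.
ordered_txs = [("alice", "bob", 5), ("bob", "carol", 2), ("alice", "carol", 1)]

a, b = Replica(), Replica()
for tx in ordered_txs:
    a.apply(tx)
    b.apply(tx)

assert a.state == b.state      # identical final state on every replica
print(a.state)                 # {'alice': -6, 'bob': 3, 'carol': 3}
```

Tendermint's job is the hard part this sketch skips: getting Byzantine nodes to agree on `ordered_txs` in the first place; the application layer (here, `Replica.apply`) is what the Cosmos SDK helps developers write.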
2.3 Project public chain planning
The development plan of the CoinEx public chain is to create a series of public chains with specific application directions, including:
DEX public chain: solves the lack of security and transparency for which centralized exchanges are widely criticized; aims to build a transparent, safe, permission-free financial platform while reproducing the experience of centralized exchanges as closely as possible;
Smart public chain: a public chain that specifically supports smart contracts and provides a platform for building complex financial applications;
Privacy public chain: mainly provides privacy protection — hiding transaction amounts, account balances, and the identities of both parties to a transaction.
To achieve the performance required by each specific application, every chain in the CoinEx public chain focuses on a particular function. For example, to improve transaction-processing speed, the DEX public chain supports only the necessary functions and does not support smart contracts; to use smart-contract functionality, a cross-chain connection between the DEX public chain and the Smart public chain is required.
2.4 Operation analysis
“ The CoinEx platform publishes monthly ecosystem reports with high transparency, but the reports are limited to transactions and development and lack progress on ecosystem and community construction, making them relatively thin.

2.4.1 Disclosure of ecosystem information

Operational risks have a direct impact on platform users: whether platform operations are smooth and transparent are issues users care about. The CoinEx platform was established in 2017 and has around three years of development behind it, making it one of the longer-running platforms in the exchange industry. It has obtained a digital currency trading license issued by the Estonian Financial Intelligence Unit (FIU), so the platform's compliance is guaranteed to some degree. The actual operation of the CoinEx platform is presented in monthly ecosystem reports. Each report covers newly listed currencies, new activities, plans for the next month, and ecosystem dynamics, spanning the CoinEx exchange, the CoinEx public chain, and the CET token. Snippet of a CoinEx ecosystem monthly report, Source: CoinEx; TokenInsight

2.4.2 Roadmap

CoinEx Chain released its development roadmap for the four quarters of 2020 in January 2020. The roadmap shows that CoinEx Chain will undergo major updates around smart contracts and a DEX hard-fork upgrade. The roadmap is planned essentially on a monthly basis, with a clear plan and direction of development. CoinEx Public Chain 2020 Development Roadmap, Source: CoinEx; TokenInsight

In addition to the roadmap, the CoinEx public chain also discloses its goals for the next month in its monthly ecosystem report. The project's mainnet was launched in November 2019.
According to TokenInsight's review of the CoinEx public chain's development from January to April and the project's monthly ecosystem reports, the smart-contract demo planned for February was not completed on schedule; the project did complete the launch of the new blockchain browser and the Asian Atlantis upgrade; the smart-contract virtual machine was scheduled for completion in April, but progress on supporting cross-chain agreements has not yet been disclosed. Overall, the development route is clearly planned and the development schedule largely matches the plan, though some discrepancies remain. Operation and development information is disclosed every month, and information transparency is high.
3. Industry & Competitors
The exchanges' expansion into the public chain field began in early 2018, when Binance officially announced the development of the Binance public chain. In June of the same year, Huobi announced at its brand upgrade conference that it would combine the strengths of the Huobi technical team and community developers to develop the Huobi public chain, "Huobi Chain". In December of the same year, OK Group announced the launch of its self-developed public chain OKChain, dedicated to providing underlying technical support and services for startups based in B-Labs. A successful public chain launch carries huge strategic significance for an exchange: it can improve the performance of the exchange's existing business and further expand its influence. As one of the most important pieces of blockchain infrastructure, a public chain can benefit the exchange behind it. Among exchanges developing public chain technology, CoinEx's main competitors are Binance, Huobi, and OKEx. Although all four are exchange platforms deploying public chains, they differ in the specific functions, economic models, and focal points of their chains.
3.1 Development progress comparison
In 2019, Binance became the first digital asset exchange to launch a public chain; its main product is the Binance exchange (DEX). In April 2020, Binance announced a second, smart-contract chain using Ethereum's virtual machine, so developers can build decentralized applications without affecting the performance and functionality of the original chain. OKEx launched OKChain's testnet in February 2020 and open-sourced the code two months later. OKChain is designed as a basis for large-scale blockchain-driven business applications, characterized by decentralized source code, peer-to-peer operation, irreversibility, and efficient autonomy. Huobi first announced Huobi Chain in July 2019; the code is open source, and the testnet was released in February 2020. As a "regulator-friendly financial blockchain", Huobi Chain focuses on providing compliance services for companies and financial institutions. The CoinEx public chain completed its mainnet launch in November 2019 and launched its new block explorer in March 2020. On April 3, 2020, CoinEx DEX uploaded its underlying code to Github, making it open source. The CoinEx public chain leans toward building a full DEX ecosystem — a one-stop solution for issuing, listing, storing, and trading — with the long-term goal of becoming blockchain financial infrastructure.
3.2 Comparison of economic models
At present, exchanges building public chain ecosystems prefer to use their existing platform token as the chain's native token: CoinEx's CET, Binance's BNB, and Huobi's HT all fall into this category. OKEx is the only exchange that issued a new token for its chain, which makes OKT the only inflationary token among the exchange public chains, while CET, HT, and BNB are all deflationary.
3.3 Decentralization of public chain
The CoinEx public chain starts with 42 validator nodes, currently the most decentralized among the exchange public chains, balancing efficiency and decentralization. OKChain also has a relatively high degree of decentralization (21 validator nodes), and its nodes have a high degree of autonomy. By contrast, Binance still firmly controls node operation and transactions. Huobi, which encourages cooperation between regulators and private finance, offers a lesser degree of decentralization: Huobi Chain uses a variant of the DPoS consensus algorithm providing features such as "supervision nodes", allowing regulators to become validators. Comparison of some dimensions of the CoinEx, Huobi, Binance and OKEx public chains, Source: TokenInsight
4. Token Economy
CoinEx Token (CET) is the native token of the CoinEx ecosystem. It was issued in January 2018, and token holders enjoy certain value-added services within the ecosystem. It is now mainly used as the native token of CoinEx Chain. As of 11 am on April 23, 2020, the circulating supply of CET is 3,215,354,906.31, out of a total supply of 5,842,177,609.53. No further CET will be issued. A daily repurchase and quarterly destruction scheme is currently in effect, and repurchase-and-burn activity can be tracked in real time on the platform's CET repurchase page.
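The daily-repurchase, quarterly-destruction mechanism just mentioned (the report later specifies that 50% of daily fee income funds the buyback, with a 3 billion supply floor) can be sketched as a small simulation. The fee income and CET price below are invented placeholder numbers, not CoinEx data:

```python
# Sketch of the repurchase-and-burn mechanism: each day 50% of fee income buys
# CET on the secondary market; the accumulated CET is destroyed quarterly until
# total supply falls to the 3 billion floor. Inputs are hypothetical.

SUPPLY_FLOOR = 3_000_000_000

def quarterly_burn(supply, daily_fee_income_usd, cet_price_usd, days=90):
    """Return (new_supply, cet_burned) for one quarter."""
    bought = sum(0.5 * daily_fee_income_usd / cet_price_usd for _ in range(days))
    burn = min(bought, supply - SUPPLY_FLOOR)   # never burn below the floor
    return supply - burn, burn

supply = 5_842_177_609   # approximate total supply quoted in this report
supply, burned = quarterly_burn(supply, daily_fee_income_usd=20_000,
                                cet_price_usd=0.005)
print(f"burned this quarter: {burned:,.0f} CET; supply now {supply:,.0f}")
```

With these made-up inputs, the buyback retires about 180 million CET per quarter; the real pace depends entirely on actual fee income and market price.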
4.1 Token Distribution
CET was originally an ERC-20 token on Ethereum. Since the CoinEx Chain mainnet launched in November 2019, part of the ERC-20 CET has been mapped to mainnet CET, and the rest will be mapped before November 10, 2020: holders deposit ERC-20 CET to the CoinEx exchange, which performs the mainnet mapping. At present, CET circulates mainly as mainnet tokens, with only a small portion of ERC-20 CET still unmapped.

The distribution of tokens currently circulating on the mainnet can be seen in the figure below; the top ten holders account for about 60.44% of all mainnet CET. Distribution of CET token holding addresses, Source: Etherscan; TokenInsight

The following figure shows the initial post-mapping distribution preset by CoinEx. It shows that a large portion of CET remains concentrated in the team's hands (31%), and the CET actually circulating in the market accounts for only 49% of the total. The initial distribution of CET token, Source: CoinEx; TokenInsight

After the mainnet mapping, the 31% of total CET (1.8 billion) held by the team will be gradually unlocked over the five years from 2020 to 2024, at 360 million CET per year; by 2024 the team's CET will be fully unlocked. Judging from recent CET dynamics, part of the team's share has already been used for destruction to further CET's deflation; if the locked 1.8 billion CET were used for similar purposes, CET and its platform would benefit. Team's CET unlocking plan, Source: CoinEx; TokenInsight
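The team unlock schedule described above (1.8 billion CET released in equal annual tranches of 360 million from 2020 through 2024) is simple enough to tabulate directly:

```python
# Team CET unlock schedule, based on the figures quoted in this report:
# 1.8 billion CET locked, 360 million released per year, 2020-2024.

TEAM_LOCKED = 1_800_000_000
ANNUAL_UNLOCK = 360_000_000

schedule = {}
remaining = TEAM_LOCKED
for year in range(2020, 2025):
    remaining -= ANNUAL_UNLOCK
    schedule[year] = remaining      # CET still locked after this year's unlock

for year, left in schedule.items():
    print(year, f"{left:,} CET still locked")

assert schedule[2024] == 0          # fully unlocked by 2024
```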
4.2 Token economic model
4.2.1 Deflation mechanism

Since CET went live in January 2018, CoinEx has increased its circulation through airdrops, transaction-fee refunds, operational promotions, and team unlocking. As one of the longer-lived platform tokens, CET's deflation mechanism has gone through a series of changes as the industry developed. In 2018, when the concept of trade-driven mining prevailed, CET used transaction mining, stake mining, and pending-order mining; these were cancelled in October, December, and April of the following year, respectively.

The repurchase-and-destruction model currently used by CET was updated by CoinEx on April 11, 2020: the original quarterly repurchase and destruction policy was adjusted to daily repurchase with quarterly destruction. Under the daily repurchase policy, CoinEx takes 50% of daily fee income to repurchase CET on the secondary market and destroys the repurchased CET quarterly, until the total remaining supply is 3 billion (currently about 5.8 billion). Alongside this update, the platform launched a page dedicated to displaying CET repurchase information, so users can clearly follow the progress of repurchase and destruction.

As of April 23, 2020, the platform has destroyed 4,157,822,390.46 CET, accounting for 41.6% of the initial total issuance. The largest single destruction, 4 billion CET, took place at the end of January 2019; the amount to be destroyed at the end of the current quarter is 3,422,983.56 CET. CET historical destruction data, Source: CoinEx; TokenInsight

4.2.2 Application scenarios

The current usage scenarios of CET are: discounted platform transaction fees, VIP services, special activity rights and interests, fuel for circulation within CoinEx Chain, and external scenarios.
Deduction and discount of platform transaction fees: CoinEx users can use CET to pay transaction fees and thereby enjoy the exclusive preferential rates offered by the platform. CET fee discount amount, Source: CoinEx; TokenInsight

VIP service: Holding a certain number of CET makes a user a platform VIP; users can also spend CET to purchase VIP status and obtain privileges such as discounted rates, accelerated withdrawals, and dedicated customer service.

Special activity rights: CET holders enjoy special rights in platform marketing activities, such as participating in token airdrops or gaining accelerated access to high-quality projects.

CoinEx Chain built-in token: CET serves as the native token and fuel of CoinEx Chain, and users can use it to invest in or trade other digital assets. CET also pays transaction fees and function fees (issuing tokens, creating trading pairs, account activation), and users can participate in validator elections by staking CET. On CoinEx DEX, CET is likewise the circulating token for issuing tokens, creating orders, Bancor operations, address activation, setting address aliases, and other scenarios.

In general, the application scenarios of CET are not yet plentiful. To grow the platform's internal ecosystem, more CET usage scenarios and incentive mechanisms need to be designed to raise user retention while attracting new users.

4.2.3 Token incentive

As the native token of the CoinEx public chain, CET is used for block rewards to increase community participation now that the public chain's mainnet has launched.
Of the total CET issuance, the 315 million CET held by the foundation will be used to incentivize the initial validator nodes and staking participants. CET annual incentive information, Source: CoinEx; TokenInsight
5. Investment & Partners
CoinEx's investment is led by Bitmain, and its main partners include Matrixport, Bitcoin.com, CoinBull, Consensus Lab, BTC.com, BTC.top, Hoo Exchange, Wa Yi, ChainFor.com, etc. The investors and major partners have rich industry experience, which can promote the project's development to a certain extent. However, the industries covered by the partners are not yet broad, which limits their role in helping CoinEx enrich its business lines and expand its ecosystem functions.
6. Community Analysis
According to TokenInsight's research on the CoinEx community, as of April 23, 2020, the official Twitter has 19,800 followers and 932 tweets; the official Telegram has 45 groups — 3 in Chinese and English, the rest in Korean, Arabic, Vietnamese, Hindi, and other languages — with a total of 56,088 members; and the Facebook account has 3,107 followers. The overall community following still has plenty of room for improvement, and community activity needs to improve. Number of followers on the CoinEx social platforms, Source: TokenInsight

At present, the project's search popularity and official-website visits are both strong, and monthly visits have slowly returned to previous levels after a significant decline in December 2019. CoinEx visit popularity, Source: TokenInsight, Similarweb, Google

Visitors to the CoinEx website are distributed across multiple countries, with no concentration of visits from any single country or region, so CoinEx's global influence is widely distributed and reasonably international. CoinEx official website's top 5 countries by number of visitors, Source: CoinEx, TokenInsight
Perpetuals, futures, and options can present quite a steep learning curve. Fear not, though: we have an incredible collection of Google Sheets and Excel spreadsheets to help both basic and advanced users, and we strongly recommend our Educational and Market Research articles, which many traders find invaluable. One of our talented community managers, Cryptarbitrage, has created and maintains a series of tools to help Deribit users learn about BTC & ETH perpetuals, futures, and options, as well as to support the growing technical needs of advanced traders.

A short introduction by Cryptarbitrage: "Although I was aware of options beforehand, I only started properly researching them in early 2018 after I discovered the Bitcoin options on Deribit. I don't need much encouragement to build a spreadsheet for something, so I quickly set about creating an Excel sheet that would show me the profit and loss of any options position I entered. This was a great way to learn the profit and loss formulas for each type of option, as well as how different option combinations interact with each other. As soon as this sheet was complete I was building positions I still didn't know the proper names for, so I was very much learning by doing. It was immediately obvious to me, though, that options were the type of instrument I wanted to trade. After a few months, once I'd done some more reading and was more confident I actually knew what I was talking about, I began creating shareable versions in Google Sheets and sharing them with the Deribit community."

Feel free to ask for help or guidance in our English Telegram community.
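The expiry profit-and-loss formulas such a spreadsheet encodes fit in a few lines. This is a generic textbook sketch, not Deribit-specific code; the strikes and premiums are made-up numbers, with the premium quoted in the same unit as the underlying:

```python
# Expiry P&L for long option positions; short positions are the negation.

def long_call_pnl(spot, strike, premium):
    return max(spot - strike, 0.0) - premium

def long_put_pnl(spot, strike, premium):
    return max(strike - spot, 0.0) - premium

# A long 10,000-strike call bought for a 500 premium:
print(long_call_pnl(spot=11_000, strike=10_000, premium=500.0))  # 500.0
print(long_call_pnl(spot=9_000, strike=10_000, premium=500.0))   # -500.0
```

Combinations (spreads, straddles, etc.) are just sums of these per-leg payoffs, which is why a spreadsheet of such formulas makes multi-leg positions easy to explore.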
Cryptarbitrage's Twitter: https://twitter.com/cryptarbitrage Cryptarbitrage's Telegram: u/Cryptarbitrage English Telegram Community: https://t.me/deribit

Deribit's Position Builder
Link: pb.deribit.com
Being able to see the potential profit/loss and implied volatility of one or many positions quickly and ad hoc is invaluable. The Position Builder lets you check the results of simulated positions, the live positions of your account, or a combination of both, across multiple instruments — perpetuals, futures, and options — at the same time. Because it uses Deribit market data, it is a quick way to preview results before adding positions to a portfolio. Development credit: the core Deribit development team.

Scenario Risk Analysis "Maximum Pain" - Excel Spreadsheet
Link: https://drive.google.com/file/d/1ANS1CgApJCDTX5ZjUwO_fegU7Z-QVSdt/view
A resource to visualize current open interest as well as the price of maximum pain for option buyers.
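"Maximum pain" as used in the spreadsheet above is the expiry price at which the total payout to option buyers is smallest. A generic sketch of the calculation, using made-up open-interest numbers rather than Deribit data:

```python
# Max pain: the candidate settlement price (conventionally one of the listed
# strikes) minimizing the total intrinsic value paid out to option holders.

def max_pain(call_oi, put_oi):
    """call_oi/put_oi: dict mapping strike -> open interest."""
    strikes = sorted(set(call_oi) | set(put_oi))

    def total_payout(settle):
        calls = sum(oi * max(settle - k, 0) for k, oi in call_oi.items())
        puts = sum(oi * max(k - settle, 0) for k, oi in put_oi.items())
        return calls + puts

    return min(strikes, key=total_payout)

calls = {9_000: 400, 10_000: 900, 11_000: 700}   # hypothetical open interest
puts = {9_000: 600, 10_000: 800, 11_000: 300}

print(max_pain(calls, puts))   # 10000
```

With this toy book, settling at 9,000 or 11,000 would pay option holders far more in intrinsic value than settling at 10,000, so 10,000 is the max-pain price.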
https://codevalley.com/whitepaper.pdf This document treats emergent coding from a philosophical perspective. It has a good introduction and description of the technology, followed by two sections of justification: one from the perspective of Fred Brooks' "No Silver Bullet" criteria, and one from an industrialization criterion.
Mark Fabbro's presentation from the Bitcoin Cash City Conference, which outlines the motivation, basic mechanics, and role of Bitcoin Cash in reproducing the industrial revolution in the software industry.
The Building the Bitcoin Cash City presentation, highlighting how the emergent coding group of companies fits into the adoption roadmap of North Queensland.
Forging Chain Metal, by Paul Chandler, CEO of Aptissio, one of the startups in the emergent coding space, which secured a million in seed funding last year.
Bitcoin Cash App Exploration: a series of apps that are among the first built with emergent coding and presented, and in the case of the Cashbar, demonstrated at the conference.
How does Emergent Coding prevent developer capture? A developer's Agent does not know what project they are contributing to and is thus paid for the specific contribution. The developer is controlling the terms of the payment rather than the alternative, an employer with an employment agreement. Why does Emergent Coding use Bitcoin BCH?
Both emergent coding and Bitcoin BCH are decentralized: Emergent coding is a decentralized development environment consisting of Agents providing design services, and each contract an Agent receives requires a BCH payment. As Agents are hosted by their developer owners, who may reside in any of 150 countries, Bitcoin Cash — a peer-to-peer electronic cash system — is ideal for including a developer regardless of geographic location.
Emergent coding will increase the value of the Bitcoin BCH blockchain: With EC, an application is typically built from many contracts (the Cashbar was designed with roughly 10,000). EC adoption will increase the value of the Bitcoin BCH blockchain in line with this influx of quality economic activity.
Emergent coding is being applied to BCH software first: One of the first market verticals being addressed with emergent coding is Bitcoin Cash infrastructure. We are already seeing quality applications created using emergent coding (such as the HULA, the Cashbar, PH2, vending machines, ATMs, etc.). More apps and tools supporting Bitcoin Cash will attract more merchants and businesses to BCH.
Emergent coding increases productivity: Emergent coding increases developer productivity and reduces duplication compared to other software development methods. Emergent coding can provide BCH devs with an advantage over other coins. A BCH dev productivity advantage will accelerate Bitcoin BCH becoming the first global currency.
Emergent coding produces higher quality binaries: Higher quality software leads to a more reliable network.
1. Who/what is Code Valley? Aptissio? BCH Tech Park? Mining and Server Complex? Code Valley Corp Pty Ltd is the company founded to commercialize emergent coding technology. Code Valley is incorporated in North Queensland, Australia. See https://codevalley.com Aptissio Australia Pty Ltd is a company founded in North Queensland and an early adopter of emergent coding. Aptissio is applying EC to Bitcoin BCH software. See https://www.aptissio.com Townsville Technology Precincts Pty Ltd (TTP) was founded to bring together partners to answer the tender for the Historic North Rail Yard Redevelopment in Townsville, North Queensland. The partners consist of P+I, Conrad Gargett, HF Consulting, and a self-managed superannuation fund(SMSF) with Code Valley Corp Pty Ltd expected to be signed as an anchor tenant. TTP answered a Townsville City Council (TCC) tender with a proposal for a AUD$53m project (stage 1) to turn the yards into a technology park and subsequently won the tender. The plan calls for the bulk of the money is to be raised in the Australian equity markets with the city contributing $28% for remediation of the site and just under 10% from the SMSF. Construction is scheduled to begin in mid 2020 and be competed two years later. Townsville Mining Pty Ltd was set up to develop a Server Complex in the Kennedy Energy Park in North Queensland. The site has undergone several studies as part of a due diligence process with encouraging results for its competitiveness in terms of real estate, power, cooling and data.
TM is presently in negotiations with the owners of the site and is operating under an NDA.
The business model calls for leasing "sectors" to mining companies that wish to mine, allowing each company to control its own direction.
Since Emergent Coding uses the BCH rail, TM is seeking to contribute to BCH security with an element of domestic mining.
TM is working with American partners to lease one of the sectors to meet that domestic objective.
The site will also host Emergent Coding Agents and Code Valley and its development partners are expected to lease several of these sectors.
TM hopes to have the site operational within 2 years.
2. What programming language are the "software agents" written in?

Agents are "built" using emergent coding. You select the features you want your Agent to have and send out the contracts. In a few minutes you are in possession of a binary ELF. You run up your ELF on your own machine and it will peer with the emergent coding and Bitcoin Cash networks. Congratulations, your Agent is now ready to accept its first contract.

3. Who controls these "agents" in a software project?

You control your own Agents. It is a decentralized development system.

4. What is the software license of these agents? Full EULA here, now.

A license gives you the right to create your own Agents and participate in the decentralized development system. We will publish the EULA when we release the product.

5. What kind of software architecture do these agents have? Daemons responding to API calls? Background daemons that make remote connections to listening applications?

Your Agent is a server that requires you to open a couple of ports so as to peer with both the EC and BCH networks. If you run a BCH full node you will be familiar with this process. Your Agent will create a "job" for each contract it receives and is designed to operate thousands of jobs simultaneously in various stages of completion. It is your responsibility to manage your Agent and keep it open for business, or risk losing market share to another developer capable of designing the same feature in a more reliable manner (or at better cost, lower resource usage, faster design time, etc.). There is competition at every classification, which is one reason emergent coding is on a fast path for improvement. It is worth reiterating here that Agents are only used in the software design process and do not perform any role in the returned project binary.

6. What is the communication protocol these agents use?

The protocol is proprietary and is part of your license.

7. Are the agents patented? Who can use these agents?
It is up to you if you want to patent your Agent, but the underlying innovation behind emergent coding is _feasible_ developer specialization. Emergent coding gives you the ability to contribute to a project without revealing your intellectual property, thus creating prospects for repeat business; it renders software patents moot. Who uses your Agents? Your Agents earn you BCH with each design contribution made. It would be wise to have your Agent open for business at all times and to encourage everyone to use your design service.

8. Do I need to cooperate with the Code Valley company all of the time in order to deploy Emergent Coding on my software projects, or can I do it myself, using documentation?

It is a decentralized system. There is no single point of failure. Code Valley intends to defend the emergent coding ecosystem from abuse and bad actors, but that role is not on your critical path.

9. Let's say Electron Cash is an Emergent Coding project. I have found a critical bug in the binary. How do I report this bug, what does Jonald Fyookball need to do, assuming the buggy component is a "shared component" pulled from EC "repositories"?

If you built Electron Cash with emergent coding, it will have been created by combining several high-level wallet features designed into your project by their respective Agents. Obviously, behind the scenes there are many more contracts that these Agents will let, and so on. For example, the Cashbar combines just 16 high-level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto. Should one of these 10,000 make a design error, Jonald only sees the high-level Agents he contracted. He can easily pinpoint which of these contractors is in breach. Similarly, this contractor can easily pinpoint which of its sub-contractors is in breach, and so on. The offender that breached their contract, wherever in the project they made their contribution, is easily identified.
For example, when my truck has a warranty problem, I do not contact the supplier of the faulty big-end bearing; I simply take it back to Mazda, who in turn will locate the fault. Finally, regarding "...assuming the buggy component is a 'shared component' pulled from EC 'repositories'?": there are no repositories or "shared components" in emergent coding.

10. What is your licensing/pricing model? Per project? Per developer? Per machine?

Your Agent charges for each design contribution it makes (i.e., per contract). The exact fee is up to you. The resulting software produced by EC is unencumbered. Code Valley's pricing model consists of a seat license, but while we are still determining the exact policy, we feel the "Valley" (where Agents advertise their wares) should charge a small fee to help prevent gaming of the catalogue, and a transaction fee to provide an income in proportion to operations.

11. What is the basic set of applications I need in order to deploy full Emergent Coding in my software project? What is the function of each application? Daemons, clients, APIs, frontends, GUIs, operating systems, databases, NoSQLs? A lot of details, please.

There's just one. You buy a license and are issued with our product called Pilot. You run Pilot (a node) up on your machine and it will peer with the EC and BCH networks. You connect your browser to Pilot, typically via localhost, and you're in business. You can build software (including special kinds of software like Agents) by simply combining available features. Pilot allows you to specify the desired features and will manage the contracts and the decentralized build process. It also gives you access to the "Valley", a decentralized advertising site that contains the "business cards" of each Agent in the community, classified into categories for easy search. If we are to make a step change in software design, inventing yet another HLL will not cut it. As Fred Brooks puts it, an essential change is needed.

12.
How can I trust a binary when I cannot see the source?

The Emergent Coding development model is very different to what you are used to. There are ways of arriving at a binary without source code. The Agents in emergent coding design their feature into your project without writing code. We can see the features we select but cannot produce the source, as the design process doesn't use a HLL. The trust model is also different. The bulk of the testing happens _before_ the project is designed, not _after_. Emergent coding produces a binary with very high integrity, and arguably far more testing is done in emergent coding than in the incumbent methods you are used to. In emergent coding, your reputation is built upon the performance of your Agent. If your Agent produces substandard features, you are simply creating an opportunity for a competitor to increase their market share at your expense. Here are some points worth noting regarding bad-actor Agents:
An Agent is a specialist and in emergent coding is unaware of the project they are contributing to. If you are a bad actor, do you compromise every contract you receive? Some? None?
Your client is relying on the quality of your contribution to maintain their own reputation. Long before any client will trust your contributions, they will have tested you to ensure the quality is at their required level. You have to be at the top of your game in your classification to even win business. This isn't some shmuck pulling your routine from a library.
Each contract to your Agent is provisioned, i.e., you advertise in advance what collaborations you require to complete your design. There is no opportunity for a "sign a Bitcoin transaction" Agent to be requesting "send an HTTP request" collaborations.
Your Agent never gets to modify code, it makes a design contribution rather than a code contribution. There is no opportunity to inject anything as the mechanism that causes the code to emerge is a higher order complexity of all Agent involvement.
There is near perfect accountability in emergent coding. You are being contracted and paid to do the design. Every project you compromise has an arrow pointed straight at you should it be detected even years later.
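The job-per-contract behaviour described in question 5, together with the provisioning of collaborations noted above, can be sketched roughly as follows. Everything here is hypothetical (the class names, the stage names, the message shapes are mine); the real Agent protocol is proprietary and not published.

```python
# Illustrative sketch only: an Agent creating a "job" per incoming
# contract and advertising, in advance, the collaborations that job
# will need. All names are invented; the real protocol is proprietary.
from dataclasses import dataclass

@dataclass
class Contract:
    contract_id: int
    fee_satoshis: int                 # design fee, paid in BCH

@dataclass
class Job:
    contract: Contract
    # Provisioned up front, as described above: the collaborations this
    # design will require are declared before any work begins.
    required_collaborations: list
    stage: str = "negotiating"        # -> "designing" -> "delivered"

class AdditionAgent:
    REQUIRES = ["input-a", "input-b", "construction-site"]

    def __init__(self):
        self.jobs = {}                # many jobs in flight at once

    def accept(self, contract):
        job = Job(contract, list(self.REQUIRES))
        self.jobs[contract.contract_id] = job
        return job

    def advance(self, contract_id):
        job = self.jobs[contract_id]
        job.stage = {"negotiating": "designing",
                     "designing": "delivered"}[job.stage]
        return job.stage

agent = AdditionAgent()
agent.accept(Contract(1, 500))
agent.advance(1)                      # collaborations agreed, now designing
agent.advance(1)                      # bytes delivered, payment earned
```

Because the required collaborations are declared when the contract is let, a mismatch (say, a "sign a Bitcoin transaction" Agent asking for an HTTP collaboration) is visible before any design work happens.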
Security is a whole other ball game in emergent coding, and current rules do not necessarily apply.

13. Every time someone rebuilds their application, do they have to pay over again for all "design contributions"? (Or is the ability to license components at a fixed single price, for at least a limited period or even perpetually, supported by the construction (agent) process?)

You are paying for the design. Every time you build (or rebuild) an application, you pay the developers involved. They do not know they are "rebuilding". This sounds dire, but it costs far less than you think, and there are many advantages. Automation is very high with emergent coding, so software design is completed for a fraction of the cost of incumbent design methods. You could perhaps rebuild many times before matching the cost of incumbent methods. Adding features is hard with incumbent methods: "...very few late-stage additions are required before the code base transforms from the familiar to a veritable monster of missed schedules, blown budgets and flawed products" (Brooks Jr 1987), whereas with emergent coding adding a late-stage feature requires a rebuild and hence integrates seamlessly. With emergent coding, you can add an unlimited number of features without risking the codebase, as there isn't one. The second part of your question incorrectly assumes software is created from licensed components, rather than created by paying Agents to design features into your project without any licenses involved.

14. In this construction process, is the vendor of a particular "design contribution" able to charge differential rates of their own choosing? E.g., could I charge a super-low rate to someone from a third-world country and slightly more when a global multinational corporation wants to license my feature?

Yes. Developers set the price and policy of their Agent's service. The Valley (where your Agent is presently advertised) currently only supports a simple price policy.
The second part of your question incorrectly assumes features are encumbered with licenses. A developer can provide their feature without revealing their intellectual property. A client has the right to reuse a developer's feature in another project but will find it uneconomical to do so.

15. Is "entirely free" a supported option during the contract negotiation for a feature?

Yes. You set the price of your Agent.

16. "There is no single point of failure." Right now, it seems one needs to register, license the construction tech, etc. Is that going to change to a model where your company is not necessarily in that loop? If not, don't you think that's a single point of failure?

It is a decentralized development system. Once you have registered, you become part of a peer-to-peer system. Code Valley has thought long and hard about its role and has chosen the Reddit model. It will set some rules for your participation and will detect and remove bad actors. If, in your view, Code Valley becomes a bad actor, you have control over your Agent, private keys and IP, and you can leave the system at any time.

17. What if I can't obtain a license because of some or other jurisdictional problem? Are you allowed to license the technology anywhere in the world, or just where your government allows it?

We are planning to operate in all 150 countries. As EC is peer-to-peer, Code Valley does not need to register as a digital currency exchange or the like. Only those countries banning BCH will miss out (until such time as BCH becomes the first global electronic cash system).

18.
For example the Cashbar combines just 16 high level Point-of-Sale features but ultimately results in more than 10,000 contracts in toto.
It seems already a reasonably complex application, so well done in having that as a demo. Thank you.

19. I asked someone else a question about how it would be possible to verify whether an application (let's say one received a binary executable) has been built with your system of emergent coding. Is this possible?

Yes, of course. If you used EC to build an application, you can sign it and claim anything you like. Your client knows it came from you because of your signature. The design contributions making up the application are not signed, but surprisingly there is still perfect accountability (see below).

20. I know it is possible to identify, for example, all source files and other metadata (like build environment) that went into constructing a binary, by storing this data inside an executable.

All metadata in emergent coding is stored offline. When your Agent completes a job, you have a log of the design agreements you made with your peers, etc. If you are challenged at a later date for breaching a design contract, you can pull your logs to see what decisions you made, what sub-contracts were let, and so on. As every Agent has their own logs, the community as a whole has a completely trustless log of each project undertaken.

21. Is this being done with EC build products, and would it allow the recipient to validate that what they've been provided has been built only using "design contributions" cryptographically signed by their providers and nothing else (i.e. no code that somehow crept in that isn't covered by the contracting process)?

The emergent coding trust model is very effective and has been proven in other industries. Remember, your Agent creates a feature in my project by actually combining smaller features contracted from other Agents; thus your reputation is linked to that of your suppliers. If Bosch makes a faulty relay in my Ford, I blame Ford for a faulty car, not Bosch, when my headlights don't work.
Similarly, you must choose and vet your sub-contractors to the level of quality that you yourself want to project. Once these relationships are set up, it becomes virtually impossible for a bad actor to participate in the system for long, or even from the get-go.

22. A look at the code generated, and a surprising answer to "why is every intermediate variable spilled?"

Thanks to u/R_Sholes, this snippet shows the actual code for:

number = number * 10 + digit

generated as a part of:

sub read/integer boolean($, 0, 100) -> guess
; copy global to local temp variable
0x004032f2  movabs r15, global.current_digit
0x004032fc  mov r15, qword [r15]
0x004032ff  mov rax, qword [r15]
0x00403302  movabs rdi, local.digit
0x0040330c  mov qword [rdi], rax
; copy global to local temp variable
0x0040330f  movabs r15, global.guess
0x00403319  mov r15, qword [r15]
0x0040331c  mov rax, qword [r15]
0x0040331f  movabs rdi, local.num
0x00403329  mov qword [rdi], rax
; multiply local variable by constant, uses new temp variable for output
0x0040332c  movabs r15, local.num
0x00403336  mov rax, qword [r15]
0x00403339  movabs rbx, 10
0x00403343  mul rbx
0x00403346  movabs rdi, local.num_times_10
0x00403350  mov qword [rdi], rax
; add local variables, uses yet another new temp variable for output
0x00403353  movabs r15, local.num_times_10
0x0040335d  mov rax, qword [r15]
0x00403360  movabs r15, local.digit
0x0040336a  mov rbx, qword [r15]
0x0040336d  add rax, rbx
0x00403370  movabs rdi, local.num_times_10_plus_digit
0x0040337a  mov qword [rdi], rax
; copy local temp variable back to global
0x0040337d  movabs r15, local.num_times_10_plus_digit
0x00403387  mov rax, qword [r15]
0x0040338a  movabs r15, global.guess
0x00403394  mov rdi, qword [r15]
0x00403397  mov qword [rdi], rax

For comparison, an equivalent snippet in C compiled by clang without optimizations gives this output:

imul rax, qword ptr [guess], 10
add rax, qword ptr [digit]
mov qword ptr [guess], rax
Collaborations at the byte layer of Agents result in designs that spill every intermediate variable. Firstly, why is this so? Agents from this early version only support one catch-all variable design when collaborating. This is similar to a compiler when all registers contain variables: the compiler must make a decision to spill a register temporarily to main memory. The compiler would still work if it spilled every variable to main memory, but it would produce code that would be, as above, hopelessly inefficient. However, by only supporting the catch-all portion of the protocol, the Code Valley designers were able to design, build and deploy these Agents faster, because an Agent needs fewer predicates in order to participate in these simpler collaborations. The protocol, however, can have many "policies" besides the catch-all default policy (Agents can collaborate over variables designed to be on the stack, or, as is common for intermediate variables, designed to use a CPU register, and so forth). This example highlights one of the very exciting aspects of emergent coding: if we now add a handful of additional predicates to a handful of these byte-layer Agents, henceforth ALL project binaries will be 10x smaller and 10x faster. Finally, there can be many Agents competing for market share at each classification. If these "gumby" Agents do not improve, you can create a "smarter" competitor (i.e. with more predicates) and win business away from them. Candy from a baby. Competition means the smartest Agents bubble to the top of every classification and puts the entire emergent coding platform on a fast path for improvement. Contrast this with incumbent libraries, which have no financial incentive to improve. Just wait until you get to see our production system.

23. How hard can an ADD Agent be?

Typically an Agent's feature is created by combining smaller features from other Agents.
The smallest features are so devoid of context and complexity that they can be rendered by designing a handful of bytes in the project binary. Below is a description of one of these "byte"-layer Agents to give you an idea of how they work. An "Addition" Agent creates the feature of "adding two numbers" in your project (this is an actual Agent). That is, it contributes to the project design a feature such that when the project binary is delivered, there will be an addition instruction somewhere in it that was designed by the contract that was let to this Agent. If you were this Agent, for each contract you received, you would need to collaborate with peers in the project to resolve vital requirements before you could proceed to design your binary "instruction". For each paid contract your Agent receives, it will need to participate in at least 4 collaborations within the design project. These are:
Input A collaboration
Input B collaboration
Construction site collaboration
You can see from the collaborations involved how your Agent can determine the precise details needed to design its instruction. As part of the contract, the Addition Agent will be provisioned with contact details so it can join these collaborations. Your Agent must collaborate with the other stakeholders in each collaboration to resolve that requirement; in this case, how a variable will be treated. The stakeholders use a protocol to arrive at an agreement and share the terms of that agreement. For example, the stakeholders of collaboration "Input A" may agree to treat the variable as a signed 64-bit integer, resolve to locate it at location 0x4fff2, or alternatively agree that the RBX register should be used, or agree to use one of the many other ways a variable can be represented. Once each collaboration has reached an agreement and the terms of that agreement have been distributed, your Agent can begin to design the binary instruction. The construction site collaboration is where you will place your binary bytes exactly. The construction site protocol is detailed in the whitepaper and is some of the magic that allows the decentralized development system to deliver the project binary. The protocol consists of three steps:
You request space in the project binary be reserved.
You are notified of the physical address of your requested space.
You deliver the binary bytes you designed to fill the reserved space.
Once the bytes are returned your Agent can remove the job from its work schedule. Job done, payment received, another happy customer with a shiny ADD instruction designed into their project binary. Note:
Observe how it is impossible for this ADD Agent to install a backdoor undetected by the client.
Observe how the Agent isn’t linking a module, or using a HLL to express the binary instruction.
Observe how with just a handful of predicates you have a working "Addition" Agent capable of designing the Addition Feature into a project with a wide range of collaboration agreements.
Observe how this Agent could conceivably not even design-in an ADD instruction if one of the design time collaboration agreements was a literal "1" (It would design in an increment instruction). There is even a case where this Agent may not deliver any binary to build its feature into your project!
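The three-step construction-site exchange above can be sketched as follows. The reservation API and the byte values are invented for illustration (the real protocol is described only in the whitepaper), but the sequence follows the text: request space, learn its address, deliver the bytes.

```python
# Toy sketch of the three-step construction-site protocol described above.
# Class and method names are invented; only the reserve/notify/deliver
# sequence comes from the source text.

class ConstructionSite:
    """Holds the emerging project binary as reserved, then filled, spans."""
    def __init__(self):
        self.binary = bytearray()
        self.reservations = {}   # agent_id -> (offset, size)

    def reserve(self, agent_id: str, size: int) -> int:
        # Steps 1 and 2: space is reserved and the agent is notified
        # of the physical address of its requested space.
        offset = len(self.binary)
        self.binary.extend(b"\x00" * size)
        self.reservations[agent_id] = (offset, size)
        return offset

    def deliver(self, agent_id: str, payload: bytes) -> None:
        # Step 3: the agent fills exactly the span it reserved.
        offset, size = self.reservations[agent_id]
        assert len(payload) == size, "payload must fit the reserved space"
        self.binary[offset:offset + size] = payload

site = ConstructionSite()
# The "Addition" agent reserves 3 bytes and delivers `add rax, rbx`
# (x86-64 encoding 48 01 d8) into its slot.
addr = site.reserve("addition-agent", 3)
site.deliver("addition-agent", bytes([0x48, 0x01, 0xD8]))
```

Note how the agent never sees anything outside its reserved span, which matches the observation above that it cannot tamper with the rest of the binary.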
24. How does EC arrive at a project binary without writing source code?

Devs using EC combine features to create solutions. They don't write code. EC devs contract Agents, which design the desired features into their project for a fee. Emergent coding uses a domain-specific contracting language (called Pilot) to describe the necessary contracts. Pilot is not a general-purpose language. As Agents create their features by similarly combining smaller features contracted from peers, your desired features may ultimately result in thousands of contracts. As it is Agents all the way down, there is no source code used to create the project binary.

Traditional: software requirements -> write code -> compile -> project binary (ELF).

Emergent coding: select desired features -> contract Agents -> project binary (ELF).

Agents themselves are created the same way: specify the features you want your Agent to have, contract the necessary Agents for those features and voilà, your Agent's project binary (ELF).

25. How is the actual binary code that agents deliver to each other written?

An Agent never touches code. With emergent coding, Agents contribute features to a project and leave the project binary to emerge as the higher-order complexity of their collective effort. Typically, Agents "contribute" their feature by causing smaller features to be contributed by peers, who in turn do likewise. By mapping features to smaller features delivered by these peers, Agents ensure their feature is delivered to the project without themselves making a direct code contribution. Peer connections established by these mappings serve both to incrementally extend a temporary project "scaffold" and to defer the need to render a feature as a code contribution.
At the periphery of the scaffold, features are so simple they can be rendered as binary fragments. These fragments use the information embodied by the scaffold to guide their concatenation back along the scaffold, emerging as the project binary; hence the term Emergent Coding. Note that the scaffold forms a temporary tree-like structure, which allows virtually all of the project's design contracts to be completed in parallel. The scaffold also automatically limits an Agent's scope to precisely the resources and site for their feature. This is why it is virtually impossible for an Agent to install a "back door" or other malicious code into the project binary.
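A rough structural sketch of the scaffold idea, under my own assumption (not stated in the source) that it can be modelled as a tree whose leaves are binary fragments concatenated in tree order:

```python
# Minimal sketch of the "scaffold": contracts form a temporary tree, only
# the leaves emit binary fragments, and concatenating the leaves in tree
# order yields the project binary. Structure only; the real scaffold
# protocol is not public.

def emerge(node) -> bytes:
    """node is either a bytes fragment (leaf) or a list of sub-contracts."""
    if isinstance(node, bytes):
        return node
    return b"".join(emerge(child) for child in node)

# A toy project: a top-level feature lets three sub-contracts, which in
# turn bottom out at byte-layer fragments (real x86-64 bytes here:
# push rbp; mov rbp, rsp; add rax, rbx; pop rbp; ret).
project = [
    [b"\x55", b"\x48\x89\xe5"],   # prologue fragments from two agents
    [b"\x48\x01\xd8"],            # an "addition" agent's fragment
    [b"\x5d", b"\xc3"],           # epilogue fragments
]
binary = emerge(project)           # fragments concatenate in tree order
```

In the real system each fragment would be designed by an independent Agent and the tree would be the contract graph; here both are plain Python values, but the point stands that no single node ever holds, or edits, anyone else's code.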
Lisk Highlights Weekly roundup March 9th 2019. The week in which a Lisk Sidechain Project became a Founding Member of a Brussels-based Blockchain Organization.
Hello everybody. The LISK project and its enthusiasts are always busy, and this week past has certainly been no exception. Seeing is believing, so here is a recap of the highlights and interesting items from the past week on the LISK subreddit and beyond...
Lisk, Hong Kong Future and Costa Rica Past.
Asia Crypto Week is fast approaching (11-17th March) and blockchain/crypto enthusiasts and industry veterans are preparing to gather together, share their knowledge and nurture mass crypto adoption. Among the events taking place on March 15th will be a meetup at the University of Hong Kong catering to the University's Blockchain Club and anyone else that might be in the area and interested in all things blockchain. Lisk is co-hosting the event in collaboration with 9up.io, who are a group of blockchain enthusiasts based in Hong Kong and who are also a prospective Lisk delegate. Max Kordek, Lisk's Co-Founder and CEO, will be the main speaker at this event on the 15th March, as he will be in town from the 10th to 15th for Asia Crypto Week and Token 2049. Tickets for the event can be secured HERE. Hong Kong has held a strategic position in the blooming blockchain industry in recent years, so I will be interested to see what emerges from this meetup and indeed Asia Crypto Week as a whole.
Now from the future to the past, and the TicoBlockChain 2019 conference in Costa Rica this past month. Lisk Central America have linked us up with a stylish montage video of the event, with interview snippets interspersed within. Software architect Jake Simmons represented Lisk Central America with his presentation on 'Scaling blockchain horizontally with Lisk'. Jake's presentation took place amid the conference's roster of lawyers, developers, educators, banking executives and investment professionals, with their keynotes, panel talks and fireside chats. For those of us who could not make the trip, we have Lisk community member illuciferium to thank for filming Jake's presentation and uploading it to Youtube HERE. You can also see Jake being interviewed at the conference by Ricardo Barquero, Nimiq Community Manager, in this VIDEO. Well done all!
Lisk Support Adds a Meetup Map.
LISK Support and TonyT908 are back with a new way for Liskers to visualise all the upcoming meetups and events related to the project. The Events Map is a more visually exciting way to discover all the upcoming events around the globe than reading through reams and reams of text. When you are ready to delve more into the details of a particular meetup, you can visit the official Lisk Events page to read further details. Special thanks should be given to the Global Delegate Team (GDT) for their guidance to TonyT908 and for providing the funds necessary to license the mapping software. Edward Trosclair, AKA StellarDynamic, came up with the original concept, so he should take a lot of praise also. Great work all round, folks.
Lisk Sidechain Project Knows the Key is to Stand Out from the Crowd.
Chief R&D Officer and Co-founder of GNY (bringing Machine Learning to Lisk), Richard Jarritt informed the project's followers on the GNY telegram that "only by having a platform that is the first to crack machine learning on chain is how we can differentiate ourselves from the countless projects in our space". He continued, "I look at the whole crypto space and at the moment having working code is key, that is the drive here". So on to the coding and Machine learning integration, how is that going? Well, this week Leo Liang, Head of Blockchain for GNY will be presenting the coding solution for how information is read by the machine learning off the chain. The GNY team always has tech meetings on Tuesdays to present the work that has been done the week previously, where they update each other on progress and then set the next task. Upcoming shortly for the team will be a demo of how the read function and Machine learning are running together and moving onto the reply function. The GNY Github is due to go live by the end of this month, following an important GNY staff conference in London. It's going to be an exciting month and I am really looking forward to it.
Lisk Sidechain Project becomes a Founding Member of a Brussels-based Blockchain Organization.
On the 6th of March the MADANA project was one of the 105 companies, startups and organizations that came together to found the International Association for Trusted Blockchain Applications (INATBA), a Brussels-based organization that will work to make blockchain more accessible, safe and usable for everyone. This was an initiative of the EU Commission and the Directorate General for Communications Networks, Content and Technology, or "DG Connect", whose responsibility is managing the Digital Agenda across Europe. It is hoped that this will now allow projects in the blockchain and distributed ledger technology ecosystem to have access to a global forum to interact with regulators and policy makers.
Next up for INATBA is its first General Assembly on 3rd April, where the 105 founding members will hopefully be joined by fresh additions to the member list. The INATBA website is now live at http://inatba.org and has a "Join" page which is accepting applications for new members to join MADANA and the likes of Iota, Cardano, Gnosis and the Quant Network for the April 3rd launch!
Why Verge Needs DigiShield NOW! And Why DigiByte Is SAFE!
Hello everyone, I’m back! Someone asked a question recently on what exactly happened to XVG (Verge) and whether this could be a problem for DGB (DigiByte), here: DigiByte vs Verge. It was a great question, and there have been people stating that this cannot be a problem for us because of DigiShield etc., without much explanation after that. I was curious and did a bit more investigating to figure out what happened and why exactly it is that we are safe. So take a read.
Some Information on Verge
Verge was founded in 2014 with code based on DogeCoin; it was initially named DogeCoinDark and was renamed Verge (XVG) in 2016. Verge has 5 mining algorithms, as does DigiByte.
However, unlike DigiByte those algorithms do not run side by side. On Verge one block can only be mined by a single algorithm at any time. This means that each algorithm takes turns mining the chain.
Prior to the latest fork there was not a single line of code that forced any algo rotation. They all run in parallel but of course in the end only one block can be accepted at given height which is obvious. After the fork algo rotation is forced so only 6 blocks with the same algo out of any 10 blocks can be accepted. - srgn_
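A toy version of the post-fork rule srgn_ describes (only 6 blocks of any 10 may share an algorithm) might look like the helper below. The function is hypothetical, written for illustration; Verge's actual validation lives in its C++ consensus code.

```python
# Hypothetical sketch of the forced algo-rotation rule quoted above:
# in any window of 10 consecutive blocks, at most 6 may be mined by
# the same algorithm.

def rotation_ok(algo_history, new_algo, window=10, max_same=6):
    """Would appending a block mined by new_algo keep the rule satisfied?"""
    recent = (list(algo_history) + [new_algo])[-window:]
    return recent.count(new_algo) <= max_same

# Six scrypt blocks plus three others are already in the window, so a
# seventh scrypt block is rejected while another algorithm is fine.
chain = ["scrypt"] * 6 + ["x17", "lyra2rev2", "myr-groestl"]
rotation_ok(chain, "scrypt")   # False: would be 7 scrypt blocks of 10
rotation_ok(chain, "x17")      # True
```

Before the fork, no such check existed, which is exactly what the attacker exploited.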
Mining Verge and The Exploit
What happened then was not a 51% attack per se, but the attacker did end up mining 99% of all new blocks, so in effect he did have power over more than 51% of the chain. The way that Verge is mined allowed for a timestamp exploit. Every block that is mined is dependent on the previous blocks for determining the algorithm to be used (this is part of the exploit). Also, their mining difficulty is adjusted every block, and each block lasts 30 seconds (also part of the exploit). Algorithms are not picked but, as stated previously, compete with one another. As for difficulty:
Difficulty is calculated by a version of DGW which is based on timestamps of last 12 blocks mined by the same algo. - srgn_
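To see why the timestamps matter, here is a deliberately simplified retarget in the spirit of, but much simpler than, the Dark Gravity Wave calculation srgn_ refers to: difficulty scales with how quickly the last N same-algo blocks *appear* to have been mined, so backdated timestamps make the chain look slow and collapse the difficulty. The formula and all numbers below are illustrative only, not Verge's real code.

```python
# Simplified DGW-style retarget: difficulty moves in proportion to
# expected vs apparent elapsed time over the last n same-algo blocks.
# Illustrative only; the real Dark Gravity Wave is more involved.

TARGET_SPACING = 30          # Verge targets one block per 30 s per the text

def retarget(old_difficulty, timestamps, n=12):
    last = timestamps[-n:]
    actual = max(last[-1] - last[0], 1)       # apparent elapsed seconds
    expected = TARGET_SPACING * (len(last) - 1)
    return old_difficulty * expected / actual

honest = [t * 30 for t in range(12)]          # blocks 30 s apart
spoofed = [t * 1800 for t in range(12)]       # backdated ~30 min apart

retarget(1000.0, honest)    # unchanged: 1000.0
retarget(1000.0, spoofed)   # collapses ~60x, so blocks become trivial to mine
```

This is the feedback loop the attacker abused: spoofed old timestamps drive the per-algo difficulty toward its floor, after which one miner can produce blocks nearly as fast as the network will accept them.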
This kind of bug is very serious and at the foundation of Verge’s codebase. In fact, in order to fix it a fork is needed, either hard fork or soft fork! What happened was that the hacker managed to change the time stamps on his blocks. He introduced a pair of false blocks. One which showed that the scrypt mining algorithm had been previously used, about 26 mins before, and then a second block which was mined with scrypt. The chain is set up so that it goes through the 5 different algorithms. So, the first false block shows the chain that the scrypt algorithm had been used in the recent past. This tricks it into thinking that the next algorithm to be used is scrypt. In this way, he was essentially able to mine 99% of all blocks.
"Pairs of blocks are used to lower the difficulty but they need to be mined in certain order so they can pass the check of median timestamp of last 11 blocks which is performed in CBlock::AcceptBlock(). There is no tricking anything into thinking that the next algo should be x because there is no algo picking. They all just run and mine blocks constantly. There is only lowering the difficulty, passing the checks so the chain is valid and accepting this chain over chains mined by other algos." - srgn_
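The median-timestamp check srgn_ mentions can be sketched as follows (a simplified illustration, not Verge's actual C++ code): a new block's timestamp only has to exceed the median of the previous 11 block timestamps, so a back-dated block can still pass as long as its spoofed stamp stays above that sliding median.

```python
# Simplified sketch of the median-time-past rule checked in
# CBlock::AcceptBlock(): a block is accepted only if its timestamp is
# greater than the median timestamp of the last 11 blocks.
from statistics import median

def passes_median_check(prev_timestamps, new_timestamp, span=11):
    return new_timestamp > median(prev_timestamps[-span:])

# 11 honest blocks, 30 seconds apart; the median is 1150:
stamps = [1000, 1030, 1060, 1090, 1120, 1150, 1180, 1210, 1240, 1270, 1300]

# A spoofed stamp of 1160 passes even though it is far older than the
# chain tip (1300) -- this is the wiggle room the attacker exploited:
print(passes_median_check(stamps, 1160))  # True
print(passes_median_check(stamps, 1100))  # False (below the median)
```

This is why the falsified pairs had to be mined in a particular order: each spoofed timestamp only needed to clear the median, not the most recent block.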
Here is a snippet of code for what the time stamps on the blocks would look like:
SetBestChain: new best=00000000049c2d3329a3 height=2009406 trust=2009407 date=04/04/18 13:50:09
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=000000000a307b54dfcf height=2009407 trust=2009408 date=04/04/18 12:16:51
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=00000000196f03f5727e height=2009408 trust=2009409 date=04/04/18 13:50:10
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=0000000010b42973b6ec height=2009409 trust=2009410 date=04/04/18 12:16:52
ProcessBlock: ACCEPTED (scrypt)
SetBestChain: new best=000000000e0655294c73 height=2009410 trust=2009411 date=04/04/18 12:16:53
ProcessBlock: ACCEPTED (scrypt)
Here’s the first falsified block that was introduced into the XVG chain (Verge-Blockchain.info). As you can see, the first fake block has a timestamp of 13:50:09 while the next is set back to 12:16:51; the following two blocks are also a fraudulent pair, and note that the next block is set to 12:16:52. So essentially, he was able to mine whole blocks one second apart!
This exploit was brought to public attention by ocminer on the bitcointalk forums. Ocminer administers a mining pool and noticed the problem after miners on the pool started complaining about a potential bug. What happened next was that the Verge developers pushed out a “fix” that did not really fix the issue: it simply shrank the time frame in which the spoofed blocks could be mined. The attack was still exploitable, and the attacker even went on to try it again!

“The background is that the "fix" promoted by the devs simply won't fix the problem. It will just make the timeframe smaller in which the blocks can be mined / spoofed and the attack will still work, just be a bit slower.” - ocminer

Ocminer then cited DigiShield as a real fix to the issue, stating that the fix should also stipulate that a single algo can only be used a set number of times in a row, rather than depending on when the algo was last used. He noted that DigiByte and Myriad had the same problems in the past, and that we fixed them! He cited this github repo for DigiByte:
It seems the exploit was so lucrative because the difficulty adjustment parameters were not enough to reduce the rewards the attacker received. Had the reward per block adjusted at a reasonable rate, as it does in DGB, the rewards would at least have dropped significantly per block. The attacker was able to make off with around 60 million Verge, which equals about 3.6 million dollars at today’s prices. The exploit depended firstly on the fact that timestamps could be falsified, and secondly on inadequate difficulty retargeting parameters.

Let’s cover how DigiShield works in more detail. One of the DigiByte devs wrote a post about this around 4 years ago, and the topic deserves revisiting and updating! I had a hard time finding good new resources and information on the details of DigiShield, so I hope you’ll appreciate this review! This is everything I found for now that I could understand; hopefully I get more information later and I’ll update this post.

Let’s go over some basics of difficulty first, then I’ll try to give you a way to visualise how these systems work. First you have to understand that mining difficulty changes over time; it has to! Look at Bitcoin’s difficulty, for example (Bitcoin difficulty over the past five months). As I covered in another post (An Introduction to DigiByte), difficulty in Bitcoin is readjusted every 2016 blocks, each lasting about 10 minutes, so a full adjustment period plays out over roughly two weeks; that’s why you see Bitcoin’s difficulty graph as a step graph. In general, the hash power on the network increases over time as more people want to mine Bitcoin, and thus the difficulty must also increase so that rewards stay proportional.
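Bitcoin's 2016-block retargeting can be sketched in a few lines (a simplified model of the consensus rule, not actual Bitcoin Core code): difficulty scales by the ratio of the target timespan to the time the last 2016 blocks actually took, clamped to at most a 4x change per adjustment.

```python
# Simplified model of Bitcoin-style difficulty retargeting.
TARGET_SPACING = 600                              # 10 minutes per block
INTERVAL = 2016                                   # blocks per retarget
TARGET_TIMESPAN = TARGET_SPACING * INTERVAL       # two weeks, in seconds

def retarget(old_difficulty, actual_timespan):
    # Clamp the measured timespan so difficulty moves at most 4x per step,
    # as Bitcoin's consensus rules do.
    actual_timespan = max(TARGET_TIMESPAN // 4,
                          min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_difficulty * TARGET_TIMESPAN / actual_timespan

# Hash power doubled, so 2016 blocks took only one week -> difficulty doubles:
print(retarget(100.0, TARGET_TIMESPAN // 2))  # 200.0
```

The two-week granularity is exactly what makes this scheme unsuitable for small coins: an attacker can mine thousands of cheap blocks before the step ever happens.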
The problem with non-dynamic difficulty adjustment is that it allows pools of miners, or single entities, to come into smaller coins and mine them continuously, essentially getting “free” or easily mined coins because the difficulty has not had time to adjust. This is not really a problem for Bitcoin or other large coins, which always have plenty of miners on their chains, but for smaller coins (and a few years ago, basically any coin other than Bitcoin) it was a real vulnerability. Once the miners had gotten their “free coins” they could dump the chain and go mine something else, because the difficulty had by then adjusted upward. Chains were often left frozen, or with very high fees and slow processing times, because not enough hash power remained to mine the transactions. This was a big problem for DigiByte in the beginning, and it very nearly killed DogeCoin. This is where our brilliant developers came in and created DigiShield (first known as MultiShield). These three articles are where most of my information on DigiShield came from; I had to reread the first one a few times to understand it, so please correct me if I make any mistakes! They are ordered from most recent to oldest, and also by relevance.
DigiShield is a system whereby the difficulty for mining DigiByte is adjusted dynamically: every single 15-second block has its difficulty adjusted to the available hashing power. This means that difficulty in DigiByte is as close to real time as we can get! There are other methods for adjusting difficulty, the first being the Bitcoin/Litecoin method (a moving average recalculated every X blocks); the Kimoto Gravity Well is another. The reason DigiShield is so effective is that its parameters are tuned just right for the difficulty to rise and fall in proportion to the amount of hash power available.

Note that Verge used a difficulty adjustment protocol more similar to DigiByte’s than to Bitcoin’s: difficulty was adjusted every 30-second block. So why was Verge vulnerable to this attack? As I stated before, Verge had a bug that firstly allowed the manipulation of timestamps, and secondly did not adjust difficulty ideally.

You have to imagine that difficulty adjustment chases hashing power. The hashing power on a chain can be seen as the “input” and the difficulty adjustment as the corresponding output; the adjustment produced is thus dependent on the amount of hashing power supplied. DigiShield was designed so that difficulty is allowed to fall in larger movements than it is allowed to rise. This asymmetrical approach makes mining on DigiByte more stable than on coins that use a symmetrical approach. It is a very delicate balancing act which requires the right parameters or else the system breaks: either the chain freezes when hash power spikes and then dumps, or mining rewards are too high because the difficulty is not set high enough! If you’ve ever taken a physics course, one way to understand DigiShield is to think of it as a dynamic asymmetrical oscillation dampener. What does this mean?
Let’s cover it in simple terms; it’s difficult to understand, and for me it was easier to visualise. Imagine something like this (click on it, it’s a video): Caravan Weight Distribution – made easy. This is not a perfect analogy for what DigiShield does, but I’ll explain the idea. The input (hashing power) and the output (difficulty adjustment) both produce oscillations in the mining reward; these two variables are what control mining rewards! So imagine that the caravan shaking violently back and forth is the mining rewards, the weights are the parameters used for difficulty adjustment, and the man’s hand pushing on the system is the hashing power. Mining rewards move back and forth (up and down) depending on the weight distribution (difficulty adjustment parameters) and the strength of the push (the amount of hashing power input to the system). Here is a quote from the devs’ article:

“The secret to DigiShield is an asymmetrical approach to difficulty re-targeting. With DigiShield, the difficulty is allowed to decrease in larger movements than it is allowed to increase from block to block. This keeps a blockchain from getting "stuck" i.e., not finding the next block for several hours following a major drop in the net hash of coin. It is all a balancing act. You need to allow the difficulty to increase enough between blocks to catch up to a sudden spike in net hash, but not enough to accidentally send the difficulty sky high when two miners get lucky and find blocks back to back.”

AND to top it all off, the solution to Verge’s timestamp manipulation bug is RIGHT HERE in DigiShield again! This was patched in DigiShield v3 (problems #7). Here’s a direct quote: “Most DigiShield v3 implementations do not get data from the most recent blocks, but begin the averaging at the MTP, which is typically 6 blocks in the past. This is ostensibly done to prevent timestamp manipulation of the difficulty.” Moreover, DigiShield does not allow one algorithm to mine more than 5 blocks in a row: if the next block comes in on the same algorithm it is rejected and handed off to the next algorithm. DigiShield is a beautiful, delicate, yet robust system designed to prevent abuse and keep mining stable! Many coins have adopted our technology!
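The asymmetric idea described above can be sketched in a few lines. This is a hedged illustration only: the cap constants below are illustrative, not DigiByte's exact consensus values, and the function name is hypothetical.

```python
# Illustrative sketch of DigiShield's asymmetric per-block retargeting:
# difficulty may fall in larger steps than it may rise.
# NOTE: the 10% / 30% caps are assumptions for illustration, not
# DigiByte's actual parameters.

def digishield_step(old_difficulty, target_spacing, actual_spacing,
                    max_rise=0.10, max_fall=0.30):
    """Adjust difficulty once per block; rises capped more tightly
    than falls, so the chain cannot get stuck after a hash-power dump."""
    ratio = target_spacing / actual_spacing   # >1 means blocks came too fast
    ratio = min(1 + max_rise, max(1 - max_fall, ratio))
    return old_difficulty * ratio

# Blocks arriving twice as fast: the rise is capped at +10%
print(round(digishield_step(100.0, 15, 7.5), 2))   # 110.0
# Blocks arriving four times too slow: the fall is capped at -30%
print(round(digishield_step(100.0, 15, 60), 2))    # 70.0
```

Because the fall cap is looser than the rise cap, two lucky back-to-back blocks cannot send difficulty sky high, while a sudden hash drop still lets difficulty come down quickly; this is the "dynamic asymmetrical oscillation dampener" in code form.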
Verge Needs DigiShield NOW!
The attacker has been identified as IDCToken on the bitcointalk forums. He recently posted that two more exploits are still available in Verge which would allow for similar attacks! He said this: “Can confirm it is still exploitable, will not abuse it futher myself but fix this problem immediately I'll give Verge some hours to solve this otherwise I'll make this public and another unpatchable problem.” - IDCToken

DigiShield could have stopped the timestamp manipulation exploit and kept the attacker from collecting unjust rewards! A look at Verge’s difficulty chart gives a good idea of what a single person was able to do to a coin worth about 1 billion dollars.
Edit - I made a few mistakes in understanding how Verge is mined; I've updated the post and left the mistakes visible. Nothing else has changed, and my point still stands: Verge could stand to gain something from adopting DigiShield!

Hi, I hope you’ve enjoyed my article! I tried to learn as much as I could about DigiShield because I thought it was an interesting question, and to help put together our DGB paper! Hopefully I made no mistakes; if I did, please let me know.

-Dereck de Mézquita

I'm a student typing this stuff in my free time; help me pay for school? Thank you! D64fAFQvJMhrBUNYpqUKQjqKrMLu76j24g https://digiexplorer.info/address/D64fAFQvJMhrBUNYpqUKQjqKrMLu76j24g