Unverified commit 82965a74, authored by Erin Swenson-Healey, committed by GitHub

replace old go-sectorbuilder with lotus' sectorbuilder + paramfetch (#61)



* deals: Sending initial proposal works

* deals: Almost sealing client data

* deals: Use temp files for AddPiece

* deals: Upstream bitswap changes

* pond: Basic message display in Block window

* move poller to sector store

* sectorstore: Address review feedback

* storageminer: Initial PaymentVerify implementation

* Wire up more proper ticket generation and verification logic

* Replace most marshaling with codegen

* Command to list sealed blocks

* update sectorbuilder

* Import proofs for paramfetch

* Extract go-fil-proofs

* Fix sectorbuilder poRepProofPartitions

* retrieval: Make types more spec compliant

* Simpler paramfetch

* Merge commit 'c57c47ffb5695f7536306c4f3ab05c9a98adb1c6' as 'extern/rleplus'

* Add rleplus

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Update sectorbuilder

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Update sectorbuilder

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Sector Commitment tracker

* jsonrpc: include method name in error log

* node: Basic graceful shutdown

* repo: Close datastore in Close

* storageminer: Better context handling

* cleaning up a few types

* Rough PoSt method

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* update go-sectorbuilder

* use new sectorbuilder file interfaces

* fix tests

* Almost working new post code

* jsonrpc: Channel buffering

* fix websocket closing

* pass those tests

* fix websocket closing again

* Devnet 3; Builtin bootstrap; NAT Port Map

* remove VDFs from tickets

* use faster bls code

* Update filebeat

Change the log level of the RPC buffer, as I want to set up an alert when it grows too high.

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Add more info to storage-miner info command output

* remove empty const block

* Update build scripts

Remove outdated scripts.

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Cleanup imports after rename

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Cleanup imports after rename

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* don't hang requests if the websocket server shuts down

* REST file import endpoint

* on chain deals: Get things to actually run!

* on chain deals: Expose more chain state in pond

* on chain deals: Deals make it to the chain

* on chain deals: Put dealIDs in CommitSector messages

* WIP: updating to new proofs code

* WIP: updating to new proofs code

* should use the new parameters

* very basic sector seal scheduling

* Fix TestSealAndVerify

* storageminer: Handle uncommitted sectors on start

* storageminer: Restart sealing on restart

* storageminer: More correct listing of sealed sectors

* fix panic when close miner

* Update sectorbuilder, v15 params

* WIP Interactive PoRep

* Some more progress on interactive porep

* use reflect select

* move select

* specific ipfs gateway

* use IPFS_GATEWAY

* more refactoring for interactive porep scheduling

* Fix sectorbuilder.VerifySeal

* Move statestore to lib

* Get interactive porep sector sealing mostly working

* Get interactive porep sector sealing mostly working

* Strip unused functionality from sectorstore

* statestore: Use reflect for mutators

* statestore: More generic keys

* Use state store for sectors

* Some smaller fixes

* INTERACTIVE POREP IS ALIVE

* Update sectorbuilder

* Update sectorbuilder

* Put WorkerThreads on sectorbuilder.Config

* rate-limit some sectorbuilder ops

* Track down all the uses of cboripld and eliminate them

* Update go-sectorbuilder again

* events: Plumb context to callbacks

* fix retrieval protocol error by wrapping stream in peeker

* WIP fixing tests

* Fix statestore.List

* Mostly fix deals

* Improve errors around deal handling

* deals: Set correct Refs

* Create filler deals

* WIP: trying to write a test to reproduce the storage deal error

* Add method to query latest deal state

* fail test if deal errors

* deals: cleanup client state machine

* cborrpc -> cborutil

* Make multiple deals per almost work

* update go-sectorbuilder

* sectorbuilder: use standalone methods

* sectorbuilder: Also test PoSt

* sectorbuilder: Always create directories

* Wip fixing a thing

* sectorbuilder: Use StandaloneWriteWithAlignment

* Storage miner API improvements

* keep track of last used sector id across restarts

* Use the same dir in TestAcquireID

* padreader: Some more testcases

* sectorbuilder: Call destroy in DI module

* Update go-sectorbuilder with gpu fixes

* sectorbuilder: apply some review suggestions

* Test to reproduce post error after restart

* Update sectorbuilder with a fix

* Update sectorbuilder

* WorkerCount on storageminer config

* storageminer: Throttle GeneratePieceCommitment in storeGarbage

* more tracing spans

* fix tests and add some more trace attributes

* Skip slow tests

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Rename to --include-test-params

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* wip

* parallel sectorbuilder test

* sectorbuilder: Call AcquireSectorId in sync

* Skip sectorbuilder tests on slow hardware

* StateAPI: account for slashing in StateMinerPower

* sectorbuilder: open FD later in AddPiece

* sectorbuilder: Drop some unused functions

* wip remote sectorbuilder workers

* remote-worker: wire up storage miner endpoints

* support remote SealPreCommit

* Stats for remote workers

* Working remote PreCommit

* WIP remote sector CommitSeal

* WIP: election post restructuring

* WIP: election post restructuring

* fix rspco serialization

* Switch to xerrors

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Add lotus-gen, rewire genesis mining

* Add lotus-gen, rewire genesis mining

* More correct challangeCount calc

* WIP getting post in sectorbuilder_test to work

* use the correct sector sizes in places

* sectorbuilder: Measure things in TestSealAndVerify

* WIP trying to get election post to compute

* sectorbuilder: Drop stateful sectorbuilder refs

* sync: correct vrfBase for ticket check

* Copy over last sector ID key when migrating sectorbuilder

* replace go-bls-sigs and go-sectorbuilder with filecoin-ffi

- remove old submodules and add new submodule
- update build tooling to consume new unified static library
- update consumers of old libs to use new package

* replace go-bls-sigs and go-sectorbuilder with filecoin-ffi

- remove old submodules and add new submodule
- update build tooling to consume new unified static library
- update consumers of old libs to use new package

* update filecoin-ffi to v18 params

* update filecoin-ffi to v18 params

* More v18 updates

* v19 parameters

* filecoin-ffi master

* filecoin-ffi master

* WIP: uncomment out windowed post code, try to make it work

* actors: Fallback post progress

* storageminer: New fallback post scheduler

* Use ProvingSet for GetSectorsForElectionPost

* Some fixes and dev utils

* seal-worker: Handle cache

* Rework miner test setups to fix tests

* self review: some cleanup

* Fix unsealing, sector based data refs

* deals: Correctly set deal ID in provider states

* actually set unsealed path in sectorbuilder

* Bunch of lint fixes

* use challangeCount as sampleRate in IsTicketWinner

* Update filecoin-ffi

* Update filecoin-ffi

* Update filecoin-ffi

* worker: Use system tar for moving cache around

* worker: Use system tar for moving cache around

* worker: Fix remaining bugs

* paramfetch: Only pull necessary params

* more staticcheck!

* Update filecoin-ffi

* sectorbuilder: update PoRepProofPartitions

* there is no real correlation between challenge count and len(winners)

* Allow no local sectorbuilder workers

* Fix AddPiece with disabled local workers

* Pre-sealing holes

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Update filecoin-ffi

* seed: Trim cache

* Fix tests, CircleCI, and make UX nicer

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* flag blocks that are received too late

* Add lazy RLE+ decoding

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* More iterative algorithms

 - Add RunIterator and decoder from RLE
 - Add BitIterator and BitsFromRuns
 - Add BitsFromSlice
 - Add RunsFromBits

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Improve bitvector performance

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Improve benchmarks and fix bitvector iterator

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Add rle encoder

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Optimize and start wrapping it up

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Remove old bitvector

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Improve complex code and comment it

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protonmail.ch>

* Replace rleplus with rlepluslazy

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Fix typo in overflow check

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Some cleanup

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* sectorbuilder: Allow restricting task types

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* sectorbuilder: Allow restricting task types

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Update to correct version

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Close files in ExtractTar

* implement sector dir aggregator

* update ffi

* use that nice function i wrote

* this will pretty much always be nil

* support copying directories

* use a package

* Add short tests

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* Move api struct to a separate pkg

* fix target for ePoSt IsTicketWinner fn

License: MIT
Signed-off-by: Jakub Sztandera <kubuxu@protocol.ai>

* fix sync tests

* Update FFI

* add option to symlink to presealed sectors

* fixup

* sectorbuilder: Fix proving on RO filesystem

* Update filecoin-ffi

* use actual symlink flag

* sectors: Handle sector state reload errors more gracefully

* Use filecoin-ffi master

* Update ffi to f261762

* sectorbuilder: check free space before creating sectors

* sectorbuilder: fs: address review

* fix(sectorbuilder): always cast fsstat.Bsize

fixes compilation issue under macos

* sectorbuilder: fix getpath

* sectorbuilder: Improve not enough space error

* circle: buildall on macos

* Wire up faults in fPoSt

* tear the world asunder

* temporarily move build into lib to prepare for extraction

* consume sectorbuilder from lotus

* port sectorbuilder from lotus

* downgrade to go-datastore 0.1.1 to match lotus
Co-authored-by: Łukasz Magiera <magik6k@users.noreply.github.com>
Co-authored-by: Whyrusleeping <why@ipfs.io>
Co-authored-by: Jakub Sztandera <kubuxu@protonmail.ch>
Co-authored-by: Frank <wholery@163.com>
Co-authored-by: Jack Yao <yaoh.cn@gmail.com>
Co-authored-by: Henri <3359083+sternhenri@users.noreply.github.com>
Co-authored-by: Caesar Wang <dtynn@163.com>
Co-authored-by: Friedel Ziegelmayer <me@dignifiedquire.com>
parent 4c9919a1
version: 2.1
jobs:
  build_and_test_linux:
    docker:
      - image: circleci/golang:1.12.1-stretch
    working_directory: ~/go/src/github.com/filecoin-project/go-sectorbuilder
    resource_class: 2xlarge
    steps:
      - configure_environment_variables
      - run:
          name: Install Rust toolchain
          command: |
            (sudo apt-get update && sudo apt-get install -y ocl-icd-opencl-dev clang libssl-dev && which cargo && which rustc) || (curl https://sh.rustup.rs -sSf | sh -s -- -y)
            rustc --version
      - run:
          name: Install jq
          command: |
            sudo apt-get update
            sudo apt-get install -y jq
            jq --version
      - checkout
      - update_submodules
      - build_project
      - lint_project
      - restore_parameter_cache
      - obtain_filecoin_parameters
      - save_parameter_cache
      - build_and_run_tests
  build_and_test_darwin:
    macos:
      xcode: "10.0.0"
    working_directory: ~/go/src/github.com/filecoin-project/go-sectorbuilder
    resource_class: large
    steps:
      - configure_environment_variables
      - run:
          name: Install go
          command: |
            curl https://dl.google.com/go/go1.12.1.darwin-amd64.pkg -o /tmp/go.pkg && \
            sudo installer -pkg /tmp/go.pkg -target /
            go version
      - run:
          name: Install pkg-config and md5sum
          command: HOMEBREW_NO_AUTO_UPDATE=1 brew install pkg-config md5sha1sum
      - run:
          name: Install Rust toolchain
          command: |
            curl https://sh.rustup.rs -sSf | sh -s -- -y
            rustc --version
      - run:
          name: Install jq
          command: |
            HOMEBREW_NO_AUTO_UPDATE=1 brew install jq
            jq --version
      - checkout
      - update_submodules
      - build_project
      - lint_project
      - restore_parameter_cache
      - obtain_filecoin_parameters
      - save_parameter_cache
      - build_and_compile_tests
workflows:
  version: 2
  test_all:
    jobs:
      - build_and_test_linux
      - build_and_test_darwin
commands:
  configure_environment_variables:
    steps:
      - run:
          name: Configure environment variables
          command: |
            echo 'export PATH="/usr/local/go/bin:${HOME}/.cargo/bin:${PATH}:${HOME}/go/bin:${HOME}/.bin"' >> $BASH_ENV
            echo 'export GOPATH="${HOME}/go"' >> $BASH_ENV
            echo 'export FIL_PROOFS_PARAMETER_CACHE="${HOME}/filecoin-proof-parameters/"' >> $BASH_ENV
            echo 'export GO111MODULE=on' >> $BASH_ENV
            echo 'export RUST_LOG=info' >> $BASH_ENV
  obtain_filecoin_parameters:
    steps:
      - run:
          name: Obtain filecoin groth parameters
          command: ./paramcache --params-for-sector-sizes=1024
          no_output_timeout: 30m
  update_submodules:
    steps:
      - run:
          name: Update submodules
          command: git submodule update --init --recursive
  build_project:
    steps:
      - run:
          name: Build project
          command: make
      - run:
          name: Ensure paramcache is installed to project root
          command: |
            test -f ./paramcache \
            || (rustup run --install nightly cargo install filecoin-proofs --force --git=https://github.com/filecoin-project/rust-fil-proofs.git --branch=master --bin=paramcache --root=./ \
            && mv ./bin/paramcache ./paramcache)
  lint_project:
    steps:
      - run:
          name: Lint project
          command: go run github.com/golangci/golangci-lint/cmd/golangci-lint run
  build_and_run_tests:
    steps:
      - run:
          name: Test project
          command: RUST_LOG=info go test -p 1 -timeout 60m
          no_output_timeout: 60m
  build_and_compile_tests:
    steps:
      - run:
          name: Build project and tests, but don't actually run the tests (used to verify that build/link works with Darwin)
          command: RUST_LOG=info go test -run=^$
  restore_parameter_cache:
    steps:
      - restore_cache:
          keys:
            - v17-proof-params-{{ arch }}
  save_parameter_cache:
    steps:
      - save_cache:
          key: v17-proof-params-{{ arch }}
          paths:
            - "~/filecoin-proof-parameters/"
**/*.h
**/*.a
**/*.pc
.install-rust-fil-sector-builder
bin/
.crates.toml
**/paramcache
build/.filecoin-ffi-install
build/.update-submodules
[submodule "rust-fil-sector-builder"]
	path = rust-fil-sector-builder
	url = https://github.com/filecoin-project/rust-fil-sector-builder
[submodule "extern/filecoin-ffi"]
	path = extern/filecoin-ffi
	url = git@github.com:filecoin-project/filecoin-ffi
	branch = master
Copyright (c) 2019 Filecoin Project
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
The MIT License (MIT)
Copyright (c) 2019 Filecoin Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
DEPS:=sector_builder_ffi.h sector_builder_ffi.pc libsector_builder_ffi.a
SHELL=/usr/bin/env bash

all: $(DEPS)
all: build
.PHONY: all

# git submodules that need to be loaded
SUBMODULES:=

$(DEPS): .install-rust-fil-sector-builder ;

# things to clean up, e.g. libfilecoin.a
CLEAN:=

.install-rust-fil-sector-builder: rust-fil-sector-builder
	./install-rust-fil-sector-builder

FFI_PATH:=extern/filecoin-ffi/
FFI_DEPS:=libfilecoin.a filecoin.pc filecoin.h
FFI_DEPS:=$(addprefix $(FFI_PATH),$(FFI_DEPS))

$(FFI_DEPS): build/.filecoin-ffi-install ;

# dummy file that marks the last time the filecoin-ffi project was built
build/.filecoin-ffi-install: $(FFI_PATH)
	$(MAKE) -C $(FFI_PATH) $(FFI_DEPS:$(FFI_PATH)%=%)
	@touch $@

SUBMODULES+=$(FFI_PATH)
BUILD_DEPS+=build/.filecoin-ffi-install
CLEAN+=build/.filecoin-ffi-install

$(SUBMODULES): build/.update-submodules ;

# dummy file that marks the last time submodules were updated
build/.update-submodules:
	git submodule update --init --recursive
	touch $@

CLEAN+=build/.update-submodules

# build and install any upstream dependencies, e.g. filecoin-ffi
deps: $(BUILD_DEPS)
.PHONY: deps

test: $(BUILD_DEPS)
	go test -v $(GOFLAGS) ./...
.PHONY: test

lint: $(BUILD_DEPS)
	golangci-lint run -v --concurrency 2 --new-from-rev origin/master
.PHONY: lint

build: $(BUILD_DEPS)
	go build -v $(GOFLAGS) ./...
.PHONY: build

clean:
	rm -rf $(DEPS) .install-rust-fil-sector-builder
	rm -rf $(CLEAN)
	-$(MAKE) -C $(FFI_PATH) clean
.PHONY: clean
# go-sectorbuilder
> Go bindings for the Filecoin Sector Builder
## Building
> make
package go_sectorbuilder_test

import (
	"bytes"
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"math/big"
	"os"
	"path/filepath"
	"testing"
	"time"
	"unsafe"

	sb "github.com/filecoin-project/go-sectorbuilder"
	"github.com/filecoin-project/go-sectorbuilder/sealed_sector_health"
	"github.com/filecoin-project/go-sectorbuilder/sealing_state"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
func TestSectorBuilderLifecycle(t *testing.T) {
	ticketA := sb.SealTicket{
		BlockHeight: 0,
		TicketBytes: [32]byte{5, 4, 2},
	}

	ticketB := sb.SealTicket{
		BlockHeight: 10,
		TicketBytes: [32]byte{1, 2, 3},
	}

	seedA := sb.SealSeed{
		BlockHeight: 50,
		TicketBytes: [32]byte{7, 4, 2},
	}

	seedB := sb.SealSeed{
		BlockHeight: 60,
		TicketBytes: [32]byte{9, 10, 11},
	}

	proverID := [32]byte{6, 7, 8}

	metadataDir := requireTempDirPath(t, "metadata")
	defer os.RemoveAll(metadataDir)

	sealedSectorDir := requireTempDirPath(t, "sealed-sectors")
	defer os.RemoveAll(sealedSectorDir)

	stagedSectorDir := requireTempDirPath(t, "staged-sectors")
	defer os.RemoveAll(stagedSectorDir)

	sectorCacheRootDir := requireTempDirPath(t, "sector-cache-root")
	defer os.RemoveAll(sectorCacheRootDir)

	ptr, err := sb.InitSectorBuilder(1024, 2, 0, metadataDir, proverID, sealedSectorDir, stagedSectorDir, sectorCacheRootDir, 1, 2)
	require.NoError(t, err)
	defer sb.DestroySectorBuilder(ptr)

	// verify that we've not yet sealed a sector
	sealedSectors, err := sb.GetAllSealedSectorsWithHealth(ptr)
	require.NoError(t, err)
	require.Equal(t, 0, len(sealedSectors), "expected to see zero sealed sectors")

	// compute the max user-bytes that can fit into a staged sector such that
	// bit-padding ("preprocessing") expands the file to $SECTOR_SIZE
	maxPieceSize := sb.GetMaxUserBytesPerStagedSector(1024)

	// create a piece which consumes all available space in a new, staged
	// sector
	pieceBytes := make([]byte, maxPieceSize)
	read, err := io.ReadFull(rand.Reader, pieceBytes)
	require.NoError(t, err)
	require.Equal(t, uint64(read), maxPieceSize)

	pieceFileA := requireTempFile(t, bytes.NewReader(pieceBytes), maxPieceSize)
	pieceFileB := requireTempFile(t, bytes.NewReader(pieceBytes), maxPieceSize)

	// generate piece commitment
	commP, err := sb.GeneratePieceCommitmentFromFile(pieceFileA, maxPieceSize)
	require.NoError(t, err)

	publicPieceInfoA := []sb.PublicPieceInfo{{
		Size:  maxPieceSize,
		CommP: commP,
	}}

	preComputedCommD, err := sb.GenerateDataCommitment(1024, publicPieceInfoA)
	require.NoError(t, err)

	// seek to the beginning
	_, err = pieceFileA.Seek(0, 0)
	require.NoError(t, err)

	// write a piece to a staged sector, reducing remaining space to 0
	sectorIDA, err := sb.AddPieceFromFile(ptr, "snoqualmie", maxPieceSize, pieceFileA)
	require.NoError(t, err)

	stagedSectors, err := sb.GetAllStagedSectors(ptr)
	require.NoError(t, err)
	require.Equal(t, 1, len(stagedSectors))
	stagedSector := stagedSectors[0]
	require.Equal(t, uint64(1), stagedSector.SectorID)

	// block until the sector is ready for us to begin sealing
	statusA, err := pollForSectorSealingStatus(ptr, sectorIDA, sealing_state.FullyPacked, time.Minute)
	require.NoError(t, err)

	// pre-commit sector to a ticket (in a non-blocking fashion)
	go func() {
		out, err := sb.SealPreCommit(ptr, statusA.SectorID, ticketA)
		require.NoError(t, err)
		require.Equal(t, sectorIDA, out.SectorID)
		require.Equal(t, ticketA.TicketBytes, out.Ticket.TicketBytes)
		require.True(t, bytes.Equal(preComputedCommD[:], out.CommD[:]))
	}()

	// write a second piece to a staged sector, reducing remaining space to 0
	sectorIDB, err := sb.AddPieceFromFile(ptr, "duvall", maxPieceSize, pieceFileB)
	require.NoError(t, err)

	// pre-commit second sector to a ticket too
	go func() {
		_, err := sb.SealPreCommit(ptr, sectorIDB, ticketB)
		require.NoError(t, err)
	}()

	// block until both sectors have successfully pre-committed
	statusA, err = pollForSectorSealingStatus(ptr, sectorIDA, sealing_state.PreCommitted, 30*time.Minute)
	require.NoError(t, err)

	statusB, err := pollForSectorSealingStatus(ptr, sectorIDB, sealing_state.PreCommitted, 30*time.Minute)
	require.NoError(t, err)

	// commit both sectors concurrently
	go func() {
		out, err := sb.SealCommit(ptr, sectorIDA, seedA)
		require.NoError(t, err)
		require.Equal(t, sectorIDA, out.SectorID)
		require.Equal(t, ticketA.TicketBytes, out.Ticket.TicketBytes)
		require.Equal(t, seedA.TicketBytes, out.Seed.TicketBytes)
	}()

	go func() {
		out, err := sb.SealCommit(ptr, sectorIDB, seedB)
		require.NoError(t, err)
		require.Equal(t, sectorIDB, out.SectorID)
	}()

	// block until both sectors have finished sealing (successfully)
	statusA, err = pollForSectorSealingStatus(ptr, sectorIDA, sealing_state.Committed, 30*time.Minute)
	require.NoError(t, err)

	statusB, err = pollForSectorSealingStatus(ptr, sectorIDB, sealing_state.Committed, 30*time.Minute)
	require.NoError(t, err)

	// verify that we used the tickets and seeds we'd intended to use
	require.Equal(t, ticketA.TicketBytes, statusA.Ticket.TicketBytes)
	require.Equal(t, ticketB.TicketBytes, statusB.Ticket.TicketBytes)
	require.Equal(t, seedA.TicketBytes, statusA.Seed.TicketBytes)
	require.Equal(t, seedB.TicketBytes, statusB.Seed.TicketBytes)

	// verify the seal proof
	isValid, err := sb.VerifySeal(1024, statusA.CommR, statusA.CommD, proverID, ticketA.TicketBytes, seedA.TicketBytes, sectorIDA, statusA.Proof)
	require.NoError(t, err)
	require.True(t, isValid)

	// enforces sort ordering of PublicSectorInfo tuples
	sectorInfo := sb.NewSortedPublicSectorInfo(sb.PublicSectorInfo{
		SectorID: statusA.SectorID,
		CommR:    statusA.CommR,
	})

	candidates, err := sb.GenerateCandidates(ptr, sectorInfo, [32]byte{}, 2, []uint64{})
	require.NoError(t, err)

	// generate a PoSt
	proofs, err := sb.GeneratePoSt(ptr, sectorInfo, [32]byte{}, 2, candidates)
	require.NoError(t, err)

	// verify the PoSt
	isValid, err = sb.VerifyPoSt(1024, sectorInfo, [32]byte{}, 2, proofs, candidates, proverID)
	require.NoError(t, err)
	require.True(t, isValid)

	sealedSectors, err = sb.GetAllSealedSectorsWithHealth(ptr)
	require.NoError(t, err)
	require.Equal(t, 2, len(sealedSectors), "expected to see two sealed sectors")
	for _, sealedSector := range sealedSectors {
		require.Equal(t, sealed_sector_health.Ok, sealedSector.Health)
	}

	// both sealed sectors contain the same data, so either will suffice
	require.Equal(t, commP, sealedSectors[0].CommD)

	// unseal the sector and retrieve the client's piece, verifying that the
	// retrieved bytes match what we originally wrote to the staged sector
	unsealedPieceBytes, err := sb.ReadPieceFromSealedSector(ptr, "snoqualmie")
	require.NoError(t, err)
	require.Equal(t, pieceBytes, unsealedPieceBytes)
}
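The lifecycle test sizes its pieces with `GetMaxUserBytesPerStagedSector(1024)`, which accounts for the bit-padding ("preprocessing") step: every 254 bits of user data gain 2 zero bits, so 127 user bytes occupy 128 sector bytes. A minimal sketch of that arithmetic, assuming this 127/128 ratio (the function name and padding rule here are illustrative, not the library's implementation):

```go
package main

import "fmt"

// maxUserBytes mirrors what GetMaxUserBytesPerStagedSector is expected to
// return under Fr32-style bit-padding: 127 user bytes expand to 128 sector
// bytes, so a sector of sectorSize bytes holds sectorSize*127/128 user bytes.
// This is a sketch for illustration, not the library's code.
func maxUserBytes(sectorSize uint64) uint64 {
	return sectorSize * 127 / 128
}

func main() {
	// a 1024-byte test sector holds 1016 user bytes, matching the
	// 1016-byte buffers used by the tests above and below
	fmt.Println(maxUserBytes(1024)) // prints 1016
}
```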
func TestImportSector(t *testing.T) {
	challengeCount := uint64(2)
	poRepProofPartitions := uint8(2)
	proverID := [32]byte{6, 7, 8}
	randomness := [32]byte{9, 9, 9}
	sectorSize := uint64(1024)
	ticket := sb.SealTicket{
		BlockHeight: 0,
		TicketBytes: [32]byte{5, 4, 2},
	}
	seed := sb.SealSeed{
		BlockHeight: 50,
		TicketBytes: [32]byte{7, 4, 2},
	}

	// initialize a sector builder
	metadataDir := requireTempDirPath(t, "metadata")
	defer os.RemoveAll(metadataDir)

	sealedSectorsDir := requireTempDirPath(t, "sealed-sectors")
	defer os.RemoveAll(sealedSectorsDir)

	stagedSectorsDir := requireTempDirPath(t, "staged-sectors")
	defer os.RemoveAll(stagedSectorsDir)

	sectorCacheRootDir := requireTempDirPath(t, "sector-cache-root-dir")
	defer os.RemoveAll(sectorCacheRootDir)

	ptr, err := sb.InitSectorBuilder(sectorSize, 2, 0, metadataDir, proverID, sealedSectorsDir, stagedSectorsDir, sectorCacheRootDir, 1, 1)
	require.NoError(t, err)
	defer sb.DestroySectorBuilder(ptr)

	sectorID, err := sb.AcquireSectorId(ptr)
	require.NoError(t, err)

	sectorCacheDirPath := requireTempDirPath(t, "sector-cache-dir")
	defer os.RemoveAll(sectorCacheDirPath)

	stagedSectorFile := requireTempFile(t, bytes.NewReader([]byte{}), 0)
	defer stagedSectorFile.Close()

	sealedSectorFile := requireTempFile(t, bytes.NewReader([]byte{}), 0)
	defer sealedSectorFile.Close()

	unsealOutputFile := requireTempFile(t, bytes.NewReader([]byte{}), 0)
	defer unsealOutputFile.Close()

	// some random bytes
	someBytes := make([]byte, 1016)
	_, err = io.ReadFull(rand.Reader, someBytes)
	require.NoError(t, err)

	// write first piece
	pieceFileA := requireTempFile(t, bytes.NewReader(someBytes[0:127]), 127)

	commPA, err := sb.GeneratePieceCommitmentFromFile(pieceFileA, 127)
	require.NoError(t, err)

	// seek back to head (generating the piece commitment moves the offset)
	_, err = pieceFileA.Seek(0, 0)
	require.NoError(t, err)

	// write the first piece using the alignment-free function
	n, commP, err := sb.StandaloneWriteWithoutAlignment(pieceFileA, 127, stagedSectorFile)
	require.NoError(t, err)
	require.Equal(t, int(n), 127)
	require.Equal(t, commP, commPA)

	// write second piece + alignment
	pieceFileB := requireTempFile(t, bytes.NewReader(someBytes[0:508]), 508)

	commPB, err := sb.GeneratePieceCommitmentFromFile(pieceFileB, 508)
	require.NoError(t, err)

	// seek back to head
	_, err = pieceFileB.Seek(0, 0)
	require.NoError(t, err)

	// the second piece relies on the alignment-computing version
	left, tot, commP, err := sb.StandaloneWriteWithAlignment(pieceFileB, 508, stagedSectorFile, []uint64{127})
	require.NoError(t, err)
	require.Equal(t, int(left), 381)
	require.Equal(t, int(tot), 889)
	require.Equal(t, commP, commPB)

	publicPieces := []sb.PublicPieceInfo{{
		Size:  127,
		CommP: commPA,
	}, {
		Size:  508,
		CommP: commPB,
	}}

	privatePieces := make([]sb.PieceMetadata, len(publicPieces))
	for i, v := range publicPieces {
		privatePieces[i] = sb.PieceMetadata{
			Key:   hex.EncodeToString(v.CommP[:]),
			Size:  v.Size,
			CommP: v.CommP,
		}
	}

	// pre-commit the sector
	output, err := sb.StandaloneSealPreCommit(sectorSize, poRepProofPartitions, sectorCacheDirPath, stagedSectorFile.Name(), sealedSectorFile.Name(), sectorID, proverID, ticket.TicketBytes, publicPieces)
	require.NoError(t, err)

	// commit the sector
	proof, err := sb.StandaloneSealCommit(sectorSize, poRepProofPartitions, sectorCacheDirPath, sectorID, proverID, ticket.TicketBytes, seed.TicketBytes, publicPieces, output)
	require.NoError(t, err)

	// verify the seal proof
	isValid, err := sb.VerifySeal(sectorSize, output.CommR, output.CommD, proverID, ticket.TicketBytes, seed.TicketBytes, sectorID, proof)
	require.NoError(t, err)
	require.True(t, isValid, "proof wasn't valid")

	// unseal and verify that things went as we planned
	require.NoError(t, sb.StandaloneUnseal(sectorSize, poRepProofPartitions, sectorCacheDirPath, sealedSectorFile.Name(), unsealOutputFile.Name(), sectorID, proverID, ticket.TicketBytes, output.CommD))
	contents, err := ioutil.ReadFile(unsealOutputFile.Name())
	require.NoError(t, err)

	// the unsealed sector includes a bunch of alignment NUL-bytes
	alignment := make([]byte, 381)

	// verify that we unsealed what we expected to unseal
	require.Equal(t, someBytes[0:127], contents[0:127])
	require.Equal(t, alignment, contents[127:508])
	require.Equal(t, someBytes[0:508], contents[508:1016])

	// verify that the sector builder owns no sealed sectors yet (the walk
	// sees only the directory itself)
	var sealedSectorPaths []string
	require.NoError(t, filepath.Walk(sealedSectorsDir, visit(&sealedSectorPaths)))
	assert.Equal(t, 1, len(sealedSectorPaths), sealedSectorPaths)

	// no sector cache dirs, either
	var sectorCacheDirPaths []string
	require.NoError(t, filepath.Walk(sectorCacheRootDir, visit(&sectorCacheDirPaths)))
	assert.Equal(t, 1, len(sectorCacheDirPaths), sectorCacheDirPaths)

	// generate a PoSt over the proving set before importing, just to exercise
	// the new API
	privateInfo := sb.NewSortedPrivateSectorInfo(sb.PrivateSectorInfo{
		SectorID:         sectorID,
		CommR:            output.CommR,
		CacheDirPath:     sectorCacheDirPath,
		SealedSectorPath: sealedSectorFile.Name(),
	})

	publicInfo := sb.NewSortedPublicSectorInfo(sb.PublicSectorInfo{
		SectorID: sectorID,
		CommR:    output.CommR,
	})

	candidatesA, err := sb.StandaloneGenerateCandidates(sectorSize, proverID, randomness, challengeCount, privateInfo)
	require.NoError(t, err)

	proofA, err := sb.StandaloneGeneratePoSt(sectorSize, proverID, privateInfo, randomness, candidatesA)
	require.NoError(t, err)

	isValid, err = sb.VerifyPoSt(sectorSize, publicInfo, randomness, challengeCount, proofA, candidatesA, proverID)
	require.NoError(t, err)
	require.True(t, isValid, "VerifyPoSt rejected the (standalone) proof as invalid")

	// import the sealed sector, transferring ownership to the sector builder
	err = sb.ImportSealedSector(ptr, sectorID, sectorCacheDirPath, sealedSectorFile.Name(), ticket, seed, output.CommR, output.CommD, output.CommC, output.CommRLast, proof, privatePieces)
	require.NoError(t, err)

	// it should now have a sealed sector!
	var sealedSectorPathsB []string
	require.NoError(t, filepath.Walk(sealedSectorsDir, visit(&sealedSectorPathsB)))
	assert.Equal(t, 2, len(sealedSectorPathsB), sealedSectorPathsB)

	// it should now have a cache dir and a bunch of goodies in the cache
	var sectorCacheDirPathsB []string
	require.NoError(t, filepath.Walk(sectorCacheRootDir, visit(&sectorCacheDirPathsB)))
	assert.Less(t, 2, len(sectorCacheDirPathsB), sectorCacheDirPathsB)

	// verify that it shows up in the sealed sector list
	metadata, err := sb.GetAllSealedSectorsWithHealth(ptr)
	require.NoError(t, err)
	require.Equal(t, 1, len(metadata))
	require.Equal(t, output.CommD, metadata[0].CommD)
	require.Equal(t, output.CommR, metadata[0].CommR)

	candidatesB, err := sb.GenerateCandidates(ptr, publicInfo, randomness, challengeCount, []uint64{})
	require.NoError(t, err)
	require.Less(t, 0, len(candidatesB))

	// finalize the ticket, but don't do anything with the results (simply
	// exercise the API)
	_, err = sb.FinalizeTicket(candidatesB[0].PartialTicket)
	require.NoError(t, err)

	proofB, err := sb.GeneratePoSt(ptr, publicInfo, randomness, challengeCount, candidatesB)
	require.NoError(t, err)

	isValid, err = sb.VerifyPoSt(sectorSize, publicInfo, randomness, challengeCount, proofB, candidatesB, proverID)
	require.NoError(t, err)
	require.True(t, isValid, "VerifyPoSt rejected the proof as invalid")
}
func TestJsonMarshalSymmetry(t *testing.T) {
for i := 0; i < 100; i++ {
xs := make([]sb.PublicSectorInfo, 10)
for j := 0; j < 10; j++ {
var x sb.PublicSectorInfo
_, err := io.ReadFull(rand.Reader, x.CommR[:])
require.NoError(t, err)
n, err := rand.Int(rand.Reader, big.NewInt(500))
require.NoError(t, err)
x.SectorID = n.Uint64()
xs[j] = x
}
toSerialize := sb.NewSortedPublicSectorInfo(xs...)
serialized, err := toSerialize.MarshalJSON()
require.NoError(t, err)
var fromSerialized sb.SortedPublicSectorInfo
err = fromSerialized.UnmarshalJSON(serialized)
require.NoError(t, err)
require.Equal(t, toSerialize, fromSerialized)
}
}
func pollForSectorSealingStatus(ptr unsafe.Pointer, sectorID uint64, targetState sealing_state.State, timeout time.Duration) (status sb.SectorSealingStatus, retErr error) {
timeoutCh := time.After(timeout)
lastState := sealing_state.Unknown
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-timeoutCh:
retErr = fmt.Errorf("timed out waiting for sector to reach desired state (last state: %s)", lastState)
return
case <-ticker.C:
sealingStatus, err := sb.GetSectorSealingStatusByID(ptr, sectorID)
if err != nil {
retErr = err
return
}
lastState = sealingStatus.State
if sealingStatus.State == targetState {
status = sealingStatus
return
} else if sealingStatus.State == sealing_state.Failed {
retErr = errors.New(sealingStatus.SealErrorMsg)
return
}
}
}
}
func requireTempFile(t *testing.T, fileContentsReader io.Reader, size uint64) *os.File {
file, err := ioutil.TempFile("", "")
require.NoError(t, err)
written, err := io.Copy(file, fileContentsReader)
require.NoError(t, err)
// check that we wrote everything
require.Equal(t, int(size), int(written))
require.NoError(t, file.Sync())
// seek to the beginning
_, err = file.Seek(0, io.SeekStart)
require.NoError(t, err)
return file
}
func requireTempDirPath(t *testing.T, prefix string) string {
dir, err := ioutil.TempDir("", prefix)
require.NoError(t, err)
return dir
}
func visit(paths *[]string) filepath.WalkFunc {
return func(path string, info os.FileInfo, err error) error {
if err != nil {
panic(err)
}
*paths = append(*paths, path)
return nil
}
}
Subproject commit bb699517a5904b3d2549ac97e2b0005ab6471dce
package sectorbuilder
import (
"fmt"
"io"
"io/ioutil"
"os"
"path/filepath"
"strings"
"sync"
"golang.org/x/xerrors"
)
func (sb *SectorBuilder) SectorName(sectorID uint64) string {
return fmt.Sprintf("s-%s-%d", sb.Miner, sectorID)
}
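SectorName above keys every on-disk artifact as "s-&lt;miner address&gt;-&lt;sector id&gt;". A minimal standalone sketch of the same naming scheme (the miner address here is illustrative, not a real actor):

```go
package main

import "fmt"

// sectorName mirrors SectorBuilder.SectorName: sectors are keyed on disk
// as "s-<miner address>-<sector id>".
func sectorName(miner string, sectorID uint64) string {
	return fmt.Sprintf("s-%s-%d", miner, sectorID)
}

func main() {
	fmt.Println(sectorName("t0101", 42)) // s-t0101-42
}
```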
func (sb *SectorBuilder) StagedSectorPath(sectorID uint64) string {
return filepath.Join(sb.filesystem.pathFor(dataStaging), sb.SectorName(sectorID))
}
func (sb *SectorBuilder) unsealedSectorPath(sectorID uint64) string {
return filepath.Join(sb.filesystem.pathFor(dataUnsealed), sb.SectorName(sectorID))
}
func (sb *SectorBuilder) stagedSectorFile(sectorID uint64) (*os.File, error) {
return os.OpenFile(sb.StagedSectorPath(sectorID), os.O_RDWR|os.O_CREATE, 0644)
}
func (sb *SectorBuilder) SealedSectorPath(sectorID uint64) (string, error) {
path := filepath.Join(sb.filesystem.pathFor(dataSealed), sb.SectorName(sectorID))
return path, nil
}
func (sb *SectorBuilder) sectorCacheDir(sectorID uint64) (string, error) {
dir := filepath.Join(sb.filesystem.pathFor(dataCache), sb.SectorName(sectorID))
err := os.Mkdir(dir, 0755)
if os.IsExist(err) {
err = nil
}
return dir, err
}
func (sb *SectorBuilder) GetPath(typ string, sectorName string) (string, error) {
_, found := overheadMul[dataType(typ)]
if !found {
return "", xerrors.Errorf("unknown sector type: %s", typ)
}
return filepath.Join(sb.filesystem.pathFor(dataType(typ)), sectorName), nil
}
func (sb *SectorBuilder) TrimCache(sectorID uint64) error {
dir, err := sb.sectorCacheDir(sectorID)
if err != nil {
return xerrors.Errorf("getting cache dir: %w", err)
}
files, err := ioutil.ReadDir(dir)
if err != nil {
return xerrors.Errorf("readdir: %w", err)
}
for _, file := range files {
if !strings.HasSuffix(file.Name(), ".dat") { // keep non-.dat files (likely the _aux metadata)
continue
}
if strings.HasSuffix(file.Name(), "-data-tree-r-last.dat") { // keep: needed later for PoSt generation
continue
}
if err := os.Remove(filepath.Join(dir, file.Name())); err != nil {
return xerrors.Errorf("rm %s: %w", file.Name(), err)
}
}
return nil
}
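TrimCache keeps everything that is not a .dat file, keeps the tree-r-last merkle cache, and deletes the remaining .dat files. The filter can be isolated as a small predicate (a sketch; the file names below are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// keepCacheFile mirrors TrimCache's filter: non-.dat files are kept,
// the "-data-tree-r-last.dat" cache is kept, all other .dat files
// are candidates for deletion after sealing.
func keepCacheFile(name string) bool {
	if !strings.HasSuffix(name, ".dat") {
		return true
	}
	return strings.HasSuffix(name, "-data-tree-r-last.dat")
}

func main() {
	fmt.Println(keepCacheFile("p_aux"))                      // true
	fmt.Println(keepCacheFile("sc-01-data-tree-r-last.dat")) // true
	fmt.Println(keepCacheFile("sc-01-data-tree-d.dat"))      // false
}
```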
func toReadableFile(r io.Reader, n int64) (*os.File, func() error, error) {
f, ok := r.(*os.File)
if ok {
return f, func() error { return nil }, nil
}
var w *os.File
f, w, err := os.Pipe()
if err != nil {
return nil, nil, err
}
var wait sync.Mutex
var werr error
wait.Lock()
go func() {
defer wait.Unlock()
var copied int64
copied, werr = io.CopyN(w, r, n)
if werr != nil {
log.Warnf("toReadableFile: copy error: %+v", werr)
}
err := w.Close()
if werr == nil && err != nil {
werr = err
log.Warnf("toReadableFile: close error: %+v", err)
return
}
if copied != n {
log.Warnf("copied different amount than expected: %d != %d", copied, n)
werr = xerrors.Errorf("copied different amount than expected: %d != %d", copied, n)
}
}()
return f, func() error {
wait.Lock()
return werr
}, nil
}
package sectorbuilder
import (
"os"
"path/filepath"
"sync"
"syscall"
"golang.org/x/xerrors"
)
type dataType string
const (
dataCache dataType = "cache"
dataStaging dataType = "staging"
dataSealed dataType = "sealed"
dataUnsealed dataType = "unsealed"
)
var overheadMul = map[dataType]uint64{ // * sectorSize
dataCache: 11, // TODO: check if true for 32G sectors
dataStaging: 1,
dataSealed: 1,
dataUnsealed: 1,
}
type fs struct {
path string
// in progress actions
reserved map[dataType]uint64
lk sync.Mutex
}
func openFs(dir string) *fs {
return &fs{
path: dir,
reserved: map[dataType]uint64{},
}
}
func (f *fs) init() error {
for _, dir := range []string{f.path,
f.pathFor(dataCache),
f.pathFor(dataStaging),
f.pathFor(dataSealed),
f.pathFor(dataUnsealed)} {
if err := os.Mkdir(dir, 0755); err != nil {
if os.IsExist(err) {
continue
}
return err
}
}
return nil
}
func (f *fs) pathFor(typ dataType) string {
_, found := overheadMul[typ]
if !found {
panic("unknown data path requested")
}
return filepath.Join(f.path, string(typ))
}
func (f *fs) reservedBytes() int64 {
var out int64
for _, r := range f.reserved {
out += int64(r)
}
return out
}
func (f *fs) reserve(typ dataType, size uint64) error {
f.lk.Lock()
defer f.lk.Unlock()
var fsstat syscall.Statfs_t
if err := syscall.Statfs(f.pathFor(typ), &fsstat); err != nil {
return err
}
fsavail := int64(fsstat.Bavail) * int64(fsstat.Bsize)
avail := fsavail - f.reservedBytes()
need := overheadMul[typ] * size
if int64(need) > avail {
return xerrors.Errorf("not enough space in '%s', need %dB, available %dB (fs: %dB, reserved: %dB)",
f.path,
need,
avail,
fsavail,
f.reservedBytes())
}
f.reserved[typ] += need
return nil
}
func (f *fs) free(typ dataType, sectorSize uint64) {
f.lk.Lock()
defer f.lk.Unlock()
f.reserved[typ] -= overheadMul[typ] * sectorSize
}
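The overheadMul table drives the reserve/free accounting above: cache space is assumed to cost up to 11x the sector size, the other data types 1x. A worked example of the arithmetic (string keys and the 1 KiB sector size are illustrative):

```go
package main

import "fmt"

// overheadMul mirrors the multipliers used by the fs reservation logic.
var overheadMul = map[string]uint64{
	"cache":    11, // sealing cache can grow to ~11x the sector size
	"staging":  1,
	"sealed":   1,
	"unsealed": 1,
}

func main() {
	sectorSize := uint64(1024) // 1 KiB test sectors
	var total uint64
	for _, typ := range []string{"cache", "staging", "sealed", "unsealed"} {
		need := overheadMul[typ] * sectorSize
		total += need
		fmt.Printf("%s: %d bytes\n", typ, need)
	}
	fmt.Println("total:", total) // total: 14336
}
```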
module github.com/filecoin-project/go-sectorbuilder
go 1.13
require (
github.com/GeertJohan/go.rice v1.0.0
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/fatih/color v1.7.0 // indirect
github.com/filecoin-project/filecoin-ffi v0.0.0-20191219131535-bb699517a590
github.com/filecoin-project/go-address v0.0.0-20191219011437-af739c490b4f
github.com/gogo/protobuf v1.3.1 // indirect
github.com/gopherjs/gopherjs v0.0.0-20190812055157-5d271430af9f // indirect
github.com/ipfs/go-cid v0.0.4 // indirect
github.com/ipfs/go-datastore v0.1.1
github.com/ipfs/go-ipld-format v0.0.2 // indirect
github.com/ipfs/go-log v1.0.0
github.com/jbenet/goprocess v0.1.3 // indirect
github.com/kr/pretty v0.1.0 // indirect
github.com/mattn/go-colorable v0.1.4 // indirect
github.com/mattn/go-isatty v0.0.9 // indirect
github.com/mattn/go-runewidth v0.0.4 // indirect
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1
github.com/minio/sha256-simd v0.1.1 // indirect
github.com/mr-tron/base58 v1.1.3 // indirect
github.com/otiai10/copy v1.0.2
github.com/pkg/errors v0.8.1
github.com/polydawn/refmt v0.0.0-20190809202753-05966cbd336a // indirect
github.com/smartystreets/assertions v1.0.1 // indirect
github.com/smartystreets/goconvey v0.0.0-20190731233626-505e41936337 // indirect
github.com/stretchr/testify v1.4.0
github.com/warpfork/go-wish v0.0.0-20190328234359-8b3e70f8e830 // indirect
go.opencensus.io v0.22.2
go.uber.org/multierr v1.4.0
golang.org/x/crypto v0.0.0-20191206172530-e9b2fee46413 // indirect
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 // indirect
golang.org/x/sys v0.0.0-20191210023423-ac6580df4449 // indirect
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 // indirect
gopkg.in/cheggaaa/pb.v1 v1.0.28
gopkg.in/yaml.v2 v2.2.4 // indirect
)
replace github.com/golangci/golangci-lint => github.com/golangci/golangci-lint v1.18.0
replace github.com/filecoin-project/filecoin-ffi => ./extern/filecoin-ffi
#!/usr/bin/env bash
set -Eeo pipefail
cd "$(dirname "${BASH_SOURCE[0]}")"
source "install-shared.bash"
subm_dir="rust-fil-sector-builder"
git submodule update --init --recursive $subm_dir
if [ "${SB_BUILD_FROM_SOURCE}" != "1" ] && download_release_tarball tarball_path "${subm_dir}"; then
tmp_dir=$(mktemp -d)
tar -C "$tmp_dir" -xzf "$tarball_path"
cp "${tmp_dir}/include/sector_builder_ffi.h" .
cp "${tmp_dir}/lib/libsector_builder_ffi.a" .
cp "${tmp_dir}/lib/pkgconfig/sector_builder_ffi.pc" .
cp "${tmp_dir}/bin/paramcache" .
(>&2 echo "successfully installed prebuilt libsector_builder")
else
(>&2 echo "building libsector_builder from local sources (dir = ${subm_dir})")
build_from_source "${subm_dir}"
mkdir -p include
mkdir -p lib/pkgconfig
find "${subm_dir}/target/release" -type f -name sector_builder_ffi.h -exec cp -- "{}" . \;
find "${subm_dir}/target/release" -type f -name libsector_builder_ffi.a -exec cp -- "{}" . \;
find "${subm_dir}" -type f -name sector_builder_ffi.pc -exec cp -- "{}" . \;
if [[ ! -f "./sector_builder_ffi.h" ]]; then
(>&2 echo "failed to install sector_builder_ffi.h")
exit 1
fi
if [[ ! -f "./libsector_builder_ffi.a" ]]; then
(>&2 echo "failed to install libsector_builder_ffi.a")
exit 1
fi
if [[ ! -f "./sector_builder_ffi.pc" ]]; then
(>&2 echo "failed to install sector_builder_ffi.pc")
exit 1
fi
(>&2 echo "WARNING: paramcache was not installed - you may wish to 'cargo install' it")
(>&2 echo "successfully built and installed libsector_builder from source")
fi
#!/usr/bin/env bash
download_release_tarball() {
__resultvar=$1
__submodule_path=$2
__repo_name=$(echo $2 | cut -d '/' -f 1)
__release_name="${__repo_name}-$(uname)"
__release_sha1=$(git rev-parse HEAD:"${__submodule_path}")
__release_tag="${__release_sha1:0:16}"
__release_tag_url="https://api.github.com/repos/filecoin-project/${__repo_name}/releases/tags/${__release_tag}"
echo "acquiring release @ ${__release_tag}"
__release_response=$(curl \
--retry 3 \
--location $__release_tag_url)
__release_url=$(echo $__release_response | jq -r ".assets[] | select(.name | contains(\"${__release_name}\")) | .url")
if [[ -z "$__release_url" ]]; then
(>&2 echo "failed to download release (tag URL: ${__release_tag_url}, response: ${__release_response})")
return 1
fi
__tar_path="/tmp/${__release_name}_$(basename ${__release_url}).tar.gz"
__asset_url=$(curl \
--head \
--retry 3 \
--header "Accept:application/octet-stream" \
--location \
--output /dev/null \
-w %{url_effective} \
"$__release_url")
if ! curl --retry 3 --output "${__tar_path}" "$__asset_url"; then
(>&2 echo "failed to download release asset (tag URL: ${__release_tag_url}, asset URL: ${__asset_url})")
return 1
fi
eval $__resultvar="'$__tar_path'"
}
build_from_source() {
__submodule_path=$1
__submodule_sha1=$(git rev-parse @:"${__submodule_path}")
__submodule_sha1_truncated="${__submodule_sha1:0:16}"
echo "building from source @ ${__submodule_sha1_truncated}"
if ! [ -x "$(command -v cargo)" ]; then
(>&2 echo 'Error: cargo is not installed.')
(>&2 echo 'Install Rust toolchain to resolve this problem.')
exit 1
fi
if ! [ -x "$(command -v rustup)" ]; then
(>&2 echo 'Error: rustup is not installed.')
(>&2 echo 'Install Rust toolchain installer to resolve this problem.')
exit 1
fi
pushd $__submodule_path
cargo --version
if [[ -f "./scripts/build-release.sh" ]]; then
./scripts/build-release.sh $(cat rust-toolchain)
else
cargo build --release --all
fi
popd
}
package sectorbuilder
import (
"github.com/filecoin-project/go-address"
"github.com/ipfs/go-datastore"
)
func TempSectorbuilderDir(dir string, sectorSize uint64, ds datastore.Batching) (*SectorBuilder, error) {
addr, err := address.NewFromString("t3vfxagwiegrywptkbmyohqqbfzd7xzbryjydmxso4hfhgsnv6apddyihltsbiikjf3lm7x2myiaxhuc77capq")
if err != nil {
return nil, err
}
sb, err := New(&Config{
SectorSize: sectorSize,
Dir: dir,
WorkerThreads: 2,
Miner: addr,
}, ds)
if err != nil {
return nil, err
}
return sb, nil
}
package paramfetch
import (
"encoding/hex"
"encoding/json"
"io"
"net/http"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"sync"
rice "github.com/GeertJohan/go.rice"
logging "github.com/ipfs/go-log"
"github.com/minio/blake2b-simd"
"go.uber.org/multierr"
"golang.org/x/xerrors"
pb "gopkg.in/cheggaaa/pb.v1"
)
var log = logging.Logger("paramfetch")
//const gateway = "http://198.211.99.118/ipfs/"
const gateway = "https://ipfs.io/ipfs/"
const paramdir = "/var/tmp/filecoin-proof-parameters"
const dirEnv = "FIL_PROOFS_PARAMETER_CACHE"
type paramFile struct {
Cid string `json:"cid"`
Digest string `json:"digest"`
SectorSize uint64 `json:"sector_size"`
}
type fetch struct {
wg sync.WaitGroup
fetchLk sync.Mutex
errs []error
}
func getParamDir() string {
if dir := os.Getenv(dirEnv); dir != "" {
return dir
}
return paramdir
}
func GetParams(storageSize uint64) error {
if err := os.Mkdir(getParamDir(), 0755); err != nil && !os.IsExist(err) {
return err
}
var params map[string]paramFile
paramBytes := rice.MustFindBox("proof-params").MustBytes("parameters.json")
if err := json.Unmarshal(paramBytes, &params); err != nil {
return err
}
ft := &fetch{}
for name, info := range params {
if storageSize != info.SectorSize && strings.HasSuffix(name, ".params") {
continue
}
ft.maybeFetchAsync(name, info)
}
return ft.wait()
}
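GetParams fetches every verifying key (.vk) but skips .params files whose sector size does not match the configured one. That filter can be expressed as a standalone predicate (a sketch; the names are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// needsFetch mirrors GetParams' filter: .vk files are always fetched,
// .params files only for the configured sector size.
func needsFetch(name string, fileSectorSize, storageSize uint64) bool {
	if storageSize != fileSectorSize && strings.HasSuffix(name, ".params") {
		return false
	}
	return true
}

func main() {
	fmt.Println(needsFetch("v20-post.params", 34359738368, 1024)) // false
	fmt.Println(needsFetch("v20-post.vk", 34359738368, 1024))     // true
	fmt.Println(needsFetch("v20-post.params", 1024, 1024))        // true
}
```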
func (ft *fetch) maybeFetchAsync(name string, info paramFile) {
ft.wg.Add(1)
go func() {
defer ft.wg.Done()
path := filepath.Join(getParamDir(), name)
err := ft.checkFile(path, info)
if err != nil && !os.IsNotExist(err) {
log.Warn(err)
}
if err == nil {
return
}
ft.fetchLk.Lock()
defer ft.fetchLk.Unlock()
if err := doFetch(path, info); err != nil {
ft.errs = append(ft.errs, xerrors.Errorf("fetching file %s failed: %w", path, err))
return
}
err = ft.checkFile(path, info)
if err != nil {
ft.errs = append(ft.errs, xerrors.Errorf("checking file %s failed: %w", path, err))
err := os.Remove(path)
if err != nil {
ft.errs = append(ft.errs, xerrors.Errorf("remove file %s failed: %w", path, err))
}
}
}()
}
func (ft *fetch) checkFile(path string, info paramFile) error {
if os.Getenv("TRUST_PARAMS") == "1" {
log.Warn("Assuming parameter files are ok. DO NOT USE IN PRODUCTION")
return nil
}
f, err := os.Open(path)
if err != nil {
return err
}
defer f.Close()
h := blake2b.New512()
if _, err := io.Copy(h, f); err != nil {
return err
}
sum := h.Sum(nil)
strSum := hex.EncodeToString(sum[:16])
if strSum == info.Digest {
log.Infof("Parameter file %s is ok", path)
return nil
}
return xerrors.Errorf("checksum mismatch in param file %s, %s != %s", path, strSum, info.Digest)
}
func (ft *fetch) wait() error {
ft.wg.Wait()
return multierr.Combine(ft.errs...)
}
func doFetch(out string, info paramFile) error {
gw := os.Getenv("IPFS_GATEWAY")
if gw == "" {
gw = gateway
}
log.Infof("Fetching %s from %s", out, gw)
outf, err := os.OpenFile(out, os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
if err != nil {
return err
}
defer outf.Close()
fStat, err := outf.Stat()
if err != nil {
return err
}
header := http.Header{}
header.Set("Range", "bytes="+strconv.FormatInt(fStat.Size(), 10)+"-")
url, err := url.Parse(gw + info.Cid)
if err != nil {
return err
}
log.Infof("GET %s", url)
req := http.Request{
Method: "GET",
URL: url,
Header: header,
Close: true,
}
resp, err := http.DefaultClient.Do(&req)
if err != nil {
return err
}
defer resp.Body.Close()
bar := pb.New64(resp.ContentLength)
bar.Units = pb.U_BYTES
bar.ShowSpeed = true
bar.Start()
_, err = io.Copy(outf, bar.NewProxyReader(resp.Body))
bar.Finish()
return err
}
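doFetch makes the download resumable by opening the output in append mode and sending a Range header derived from the bytes already on disk. A minimal sketch of that header construction (the function name is illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

// rangeHeader builds the Range header doFetch-style: ask the gateway
// to send only the bytes not yet written locally.
func rangeHeader(alreadyHave int64) http.Header {
	h := http.Header{}
	h.Set("Range", "bytes="+strconv.FormatInt(alreadyHave, 10)+"-")
	return h
}

func main() {
	h := rangeHeader(4096)
	fmt.Println(h.Get("Range")) // bytes=4096-
}
```

If the partial file on disk fails its checksum, the caller removes it so the next attempt starts from byte 0.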
{
"v20-proof-of-spacetime-election-5f585aca354eb68e411c8582ed0efd800792430e4e76d73468c4fc03f1a8d6d2.params": {
"cid": "QmX7tYeNPWae2fjZ3Am6GB9dmHvLqvoz8dKo3PR98VYxH9",
"digest": "39a9edec3355516674f0d12b926be493",
"sector_size": 34359738368
},
"v20-proof-of-spacetime-election-5f585aca354eb68e411c8582ed0efd800792430e4e76d73468c4fc03f1a8d6d2.vk": {
"cid": "QmbNGx7pNbGiEr8ykoHxVXHW2LNSmGdsxKtj1onZCyguCX",
"digest": "0227ae7df4f2affe529ebafbbc7540ee",
"sector_size": 34359738368
},
"v20-proof-of-spacetime-election-a4e18190d4b4657ba1b4d08a341871b2a6f398e327cb9951b28ab141fbdbf49d.params": {
"cid": "QmRGZsNp4mp1cZshcXqt3VMuWscAEsiMa2iepF4CsWWoiv",
"digest": "991041a354b12c280542741f58c7f2ca",
"sector_size": 1024
},
"v20-proof-of-spacetime-election-a4e18190d4b4657ba1b4d08a341871b2a6f398e327cb9951b28ab141fbdbf49d.vk": {
"cid": "QmWpmrhCGVcfqLyqp5oGAnhPmCE5hGTPaauHi25mpQwRSU",
"digest": "91fac550e1f9bccab213830bb0c85bd6",
"sector_size": 1024
},
"v20-proof-of-spacetime-election-a9eb6d90b896a282ec2d3a875c6143e3fcff778f0da1460709e051833651559b.params": {
"cid": "QmenSZXh1EsSyHiSRvA6wb8yaPhYBTjrKehJw96Px5HnN4",
"digest": "6322eacd2773163ddd51f9ca7d645fc4",
"sector_size": 1073741824
},
"v20-proof-of-spacetime-election-a9eb6d90b896a282ec2d3a875c6143e3fcff778f0da1460709e051833651559b.vk": {
"cid": "QmPvZoMKofw6eDhDg5ESJA2QAZP8HvM6qMQk7fw4pq9bQf",
"digest": "0df62745fceac922e3e70847cfc70b52",
"sector_size": 1073741824
},
"v20-proof-of-spacetime-election-bf872523641b1de33553db2a177df13e412d7b3b0103e6696ae0a1cf5d525259.params": {
"cid": "QmVibFqzkZoL8cwQmzj8njPokCQGCCx4pBcUH77bzgJgV9",
"digest": "de9d71e672f286706a1673bd57abdaac",
"sector_size": 16777216
},
"v20-proof-of-spacetime-election-bf872523641b1de33553db2a177df13e412d7b3b0103e6696ae0a1cf5d525259.vk": {
"cid": "QmZa5FX27XyiEXQQLQpHqtMJKLzrcY8wMuj3pxzmSimSyu",
"digest": "7f796d3a0f13499181e44b5eee0cc744",
"sector_size": 16777216
},
"v20-proof-of-spacetime-election-ffc3fb192364238b60977839d14e3154d4a98313e30d46694a12af54b6874975.params": {
"cid": "Qmbt2SWWAmMcYoY3DAiRDXA8fAuqdqRLWucJMSxYmzBCmN",
"digest": "151ae0ae183fc141e8c2bebc28e5cc10",
"sector_size": 268435456
},
"v20-proof-of-spacetime-election-ffc3fb192364238b60977839d14e3154d4a98313e30d46694a12af54b6874975.vk": {
"cid": "QmUxvPu4xdVmjMFihUKoYyEdXBqxsXkvmxRweU7KouWHji",
"digest": "95eb89588e9d1832aca044c3a13178af",
"sector_size": 268435456
},
"v20-stacked-proof-of-replication-117839dacd1ef31e5968a6fd13bcd6fa86638d85c40c9241a1d07c2a954eb89b.params": {
"cid": "QmQZe8eLo2xXbhSDxtyYZNqEjqjdcWGdADywECRvNEZQdX",
"digest": "fcd50e2e08a8560a6bb3418e883567ed",
"sector_size": 268435456
},
"v20-stacked-proof-of-replication-117839dacd1ef31e5968a6fd13bcd6fa86638d85c40c9241a1d07c2a954eb89b.vk": {
"cid": "Qme1hn6QT1covfoUFGDZkqoE1pMTax9FNW3nWWmTNqFe7y",
"digest": "872e244d86499fd659082e3bcf3f13e7",
"sector_size": 268435456
},
"v20-stacked-proof-of-replication-b46f3a1051afbb67f70aae7082da95def62eee943662f3e1bf69837fb08aaae4.params": {
"cid": "QmSfrPDC9jwY4MKrjzhCqDBBAG44wSDM8oE5NuDwWSh2xN",
"digest": "0a338b941c5f17946340de5fc95cab30",
"sector_size": 34359738368
},
"v20-stacked-proof-of-replication-b46f3a1051afbb67f70aae7082da95def62eee943662f3e1bf69837fb08aaae4.vk": {
"cid": "QmTDGynCmnbaZNBP3Bv3F3duC3ecKRubCKeMUiQQZYbGpF",
"digest": "c752e070a6b7aa8b79aa661a6b600b55",
"sector_size": 34359738368
},
"v20-stacked-proof-of-replication-e71093863cadc71de61f38311ee45816633973bbf34849316b147f8d2e66f199.params": {
"cid": "QmXjSSnMUnc7EjQBYtTHhvLU3kXJTbUyhVhJRSTRehh186",
"digest": "efa407fd09202dffd15799a8518e73d3",
"sector_size": 1024
},
"v20-stacked-proof-of-replication-e71093863cadc71de61f38311ee45816633973bbf34849316b147f8d2e66f199.vk": {
"cid": "QmYHW3zhQouDP4okFbXSsRMcZ8bokKGvzxqbv7ZrunPMiG",
"digest": "b2f09a0ccb62da28c890d5b881c8dcd2",
"sector_size": 1024
},
"v20-stacked-proof-of-replication-e99a585174b6a45b254ba4780d72c89ad808c305c6d11711009ade4f39dba8e9.params": {
"cid": "QmUhyfNeLb32LfSkjsUwTFYLXQGMj6JQ8daff4DdVMt79q",
"digest": "b53c1916a63839ec345aa2224e9198b7",
"sector_size": 1073741824
},
"v20-stacked-proof-of-replication-e99a585174b6a45b254ba4780d72c89ad808c305c6d11711009ade4f39dba8e9.vk": {
"cid": "QmWReGfbuoozNErbskmFvqV4q36BY6F2WWb4cVFc3zoYkA",
"digest": "20d58a3fae7343481f8298a2dd493dd7",
"sector_size": 1073741824
},
"v20-stacked-proof-of-replication-f571ee2386f4c65a68e802747f2d78691006fc81a67971c4d9641403fffece16.params": {
"cid": "QmSAHu14Pe8iav6BYCt9XkpHJ73XM7tcpY4d9JK9BST9HU",
"digest": "7698426202c7e07b26ef056d31485b3a",
"sector_size": 16777216
},
"v20-stacked-proof-of-replication-f571ee2386f4c65a68e802747f2d78691006fc81a67971c4d9641403fffece16.vk": {
"cid": "QmaKtFLShnhMGVn7P9UsHjkgqtqRFSwCStqqykBN7u8dax",
"digest": "834408e5c3fce6ec5d1bf64e64cee94e",
"sector_size": 16777216
}
}