Compare commits
41 Commits
cmd/packse ... master

SHA1:
19b2560e2d
3974db129e
7dffaaa5d7
2a1eafa306
3585b7943a
6cbbe7328a
a83aedd502
8474cfbc5d
2f842a21f3
439bf2422b
2b280de481
5398dddb02
e0ae6bb4b6
6b836895a0
301dc0c7c8
a6c2991781
d827d8aace
565a269cef
16d836da9a
8cae4d0f8f
6ea49bb3b3
83a5226e1a
1b84160dcf
52213cf67e
f70914aa38
ca54fb8fbb
d5b4fcf0be
f08165b0f1
6f532296ef
984639d475
1a37903d98
7abc96d537
280bf2b181
566c23dd7e
f7cd22f633
8fc082c4ca
b7b5dd55d0
b6da0b4e48
0883b0b405
8528a31d54
c2d7bdaa71
@ -0,0 +1,19 @@
Copyright (c) 2020 Laurence Withers

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md

@ -1,5 +1,7 @@
 # HTTP resource pack server
 
+[](https://pkg.go.dev/src.lwithers.me.uk/go/htpack)
+
 A common scenario is that you have a set of static resources that you want to
 serve up quickly via HTTP (for example: stylesheets, WASM).
@ -7,10 +9,10 @@ This package provides a `net/http`-compatible `http.Handler` to do so, with
 support for:
 - compression
 - gzip
-- brotli, if you have the external compression binary available at pack time
+- brotli
 - does not yet support Transfer-Encoding, only Accept-Encoding/Content-Encoding
 - etags
-- ranges (TODO)
+- ranges
 
 The workflow is as follows:
 - (optional) build YAML file describing files to serve
@ -18,3 +20,52 @@ The workflow is as follows:
 - create `htpack.Handler` pointing at .htpack file
 
 The handler can easily be combined with middleware (`http.StripPrefix` etc.).
+
+## Range handling notes
+
+Too many bugs have been found with range handling and composite ranges, so the
+handler only accepts a single range within the limits of the file. Anything
+else will be ignored.
+
+The interaction between range handling and compression also seems a little
+ill-defined; as we have pre-compressed data, however, we can consistently
+serve the exact same byte data for compressed files.
+
+## Angular-style single-page application handling
+
+If you wish to support an angular.js-style single page application, in which
+a Javascript application uses the browser's history API to create a set of
+virtual paths ("routes"), it is necessary to somehow intercept HTTP 404 errors
+being returned from the handler and instead return an HTTP 200 with an HTML
+document.
+
+This can be achieved with a number of methods.
+
+The simplest method is to tell `packserver` itself which resource to use
+instead of returning an HTTP 404. Use the command line argument
+`--fallback-404 /index.html` (or whichever named resource). The filename must
+match a packed resource, so it will be preceded with a `/`. It must exist in
+all packfiles being served.
+
+If you have an nginx instance reverse proxying in front of `htpack`, then you
+can use a couple of extra directives. This is very flexible as it lets you
+override the resource for different routes. For example:
+
+    # prevent page loaded at "http://server.example/my-application" from
+    # requesting resources at "/*" when it should request them at
+    # "/my-application/*" instead
+    location = /my-application {
+        return 308 /my-application/;
+    }
+
+    location /my-application/ {
+        proxy_pass http://htpack-addr:8080/;
+        proxy_intercept_errors on;
+        error_page 404 =200 /my-application/;
+    }
+
+If you are using the handler as a library, then you may call
+`handler.SetNotFound(filename)` to select a resource to return (with HTTP 200)
+if a request is made for a resource that is not found. The filename must match
+a packed resource, so it will be preceded with a `/` (for example it may be
+`"/index.html"`).
@ -0,0 +1,105 @@
package main

import (
	"fmt"
	"path/filepath"
	"strings"

	"src.lwithers.me.uk/go/htpack/cmd/htpacker/packer"
)

type ctGlobEntry struct {
	pattern, contentType string
	pathComponents       int
}

type ctGlobList []ctGlobEntry

func parseGlobs(flags []string) (ctGlobList, error) {
	var ctGlobs ctGlobList
	for _, flag := range flags {
		// split pattern:content-type
		pos := strings.LastIndexByte(flag, ':')
		if pos == -1 {
			return nil, &parseGlobError{
				Value: flag,
				Err:   "must be pattern:content-type",
			}
		}
		pattern, ct := flag[:pos], flag[pos+1:]

		// patterns starting with "/" must match the entire directory
		// prefix; otherwise, an arbitrary number of path components are
		// allowed prior to the prefix
		var pathComponents int
		if strings.HasPrefix(pattern, "/") {
			pathComponents = -1
			pattern = strings.TrimPrefix(pattern, "/")
		} else {
			pathComponents = 1 + strings.Count(pattern, "/")
		}

		// test that the pattern's syntax is valid
		if _, err := filepath.Match(pattern, "test"); err != nil {
			return nil, &parseGlobError{
				Value: flag,
				Err:   err.Error(),
			}
		}

		ctGlobs = append(ctGlobs, ctGlobEntry{
			pattern:        pattern,
			contentType:    ct,
			pathComponents: pathComponents,
		})
	}

	return ctGlobs, nil
}

// ApplyContentTypes will scan the list of files to pack, matching by filename,
// and on match will apply the given content type.
func (ctGlobs ctGlobList) ApplyContentTypes(ftp packer.FilesToPack) {
	for name := range ftp {
		for _, entry := range ctGlobs {
			testName := trimPathComponents(name, entry.pathComponents)
			matched, _ := filepath.Match(entry.pattern, testName)
			if matched {
				f := ftp[name]
				f.ContentType = entry.contentType
				ftp[name] = f
				break
			}
		}
	}
}

func trimPathComponents(name string, components int) string {
	name = strings.TrimPrefix(name, "/") // FilesToPack keys = absolute path

	// if we are matching the full prefix, don't otherwise manipulate the
	// name
	if components < 0 {
		return name
	}

	// otherwise, trim the number of components remaining in the path so
	// that we are only matching the trailing path components from the
	// FilesToPack key
	parts := 1 + strings.Count(name, "/")
	for ; parts > components; parts-- {
		pos := strings.IndexByte(name, '/')
		name = name[pos+1:]
	}
	return name
}

// parseGlobError is returned from parseGlobs on error.
type parseGlobError struct {
	Value string
	Err   string
}

func (pge *parseGlobError) Error() string {
	return fmt.Sprintf("--content-type entry %q: %s", pge.Value, pge.Err)
}
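The trailing-component matching rule implemented by `parseGlobs` and `trimPathComponents` above can be demonstrated standalone. `matchTail` below is a hypothetical helper that folds the same logic into one function for illustration; it is not part of the package:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// matchTail reports whether pattern matches the trailing path components of
// name: a pattern starting with "/" must match the whole path, otherwise only
// as many trailing components as the pattern itself contains are compared.
func matchTail(pattern, name string) bool {
	name = strings.TrimPrefix(name, "/")
	components := -1 // -1 = match the full path
	if strings.HasPrefix(pattern, "/") {
		pattern = strings.TrimPrefix(pattern, "/")
	} else {
		components = 1 + strings.Count(pattern, "/")
	}
	if components >= 0 {
		// drop leading components until only the trailing ones remain
		for parts := 1 + strings.Count(name, "/"); parts > components; parts-- {
			name = name[strings.IndexByte(name, '/')+1:]
		}
	}
	ok, _ := filepath.Match(pattern, name)
	return ok
}

func main() {
	fmt.Println(matchTail("*.txt", "/deep/dir/notes.txt"))       // matches anywhere
	fmt.Println(matchTail("baz/qux.png", "/foo/baz/qux.png"))    // trailing components
	fmt.Println(matchTail("baz/qux.png", "/foo/qux.png"))        // wrong directory
	fmt.Println(matchTail("/baz/bar.jpeg", "/foo/baz/bar.jpeg")) // "/" needs exact path
}
```

This mirrors the behaviour exercised by `TestApplyContentTypes` in the test file that follows: `baz/qux.png` matches `/foo/baz/qux.png` but not `/foo/qux.png`, and a leading `/` disables trailing-component trimming entirely.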
@ -0,0 +1,109 @@
package main

import (
	"testing"

	"src.lwithers.me.uk/go/htpack/cmd/htpacker/packer"
)

func TestParseGlobs(t *testing.T) {
	ctGlobs, err := parseGlobs([]string{
		"*.foo:text/html",
		"*.bar:text/plain",
		"baz/qux/*.js:application/javascript",
		"/abs/file:image/png",
	})
	if err != nil {
		t.Fatal(err)
	}

	check := func(pos int, pattern, contentType string, pathComponents int) {
		if pos >= len(ctGlobs) {
			t.Errorf("entry %d not present", pos)
			return
		}

		if pattern != ctGlobs[pos].pattern {
			t.Errorf("entry %d: expected pattern %q but got %q",
				pos, pattern, ctGlobs[pos].pattern)
		}

		if contentType != ctGlobs[pos].contentType {
			t.Errorf("entry %d: expected content type %q but got %q",
				pos, contentType, ctGlobs[pos].contentType)
		}

		if pathComponents != ctGlobs[pos].pathComponents {
			t.Errorf("entry %d: expected num. path components %d but got %d",
				pos, pathComponents, ctGlobs[pos].pathComponents)
		}
	}

	check(0, "*.foo", "text/html", 1)
	check(1, "*.bar", "text/plain", 1)
	check(2, "baz/qux/*.js", "application/javascript", 3)
	check(3, "abs/file", "image/png", -1)
}

func TestParseGlobsErrSep(t *testing.T) {
	const badValue = "hello/dave.js" // missing ":" separator
	_, err := parseGlobs([]string{badValue})
	switch err := err.(type) {
	case *parseGlobError:
		if err.Value != badValue {
			t.Errorf("expected value %q but got %q", badValue, err.Value)
		}
	case nil:
		t.Fatal("expected error")
	default:
		t.Errorf("unexpected error type %T (value %v)", err, err)
	}
}

func TestParseGlobsErrPattern(t *testing.T) {
	const badValue = "[-z]:foo/bar" // malformed character class
	_, err := parseGlobs([]string{badValue})
	switch err := err.(type) {
	case *parseGlobError:
		if err.Value != badValue {
			t.Errorf("expected value %q but got %q", badValue, err.Value)
		}
	case nil:
		t.Fatal("expected error")
	default:
		t.Errorf("unexpected error type %T (value %v)", err, err)
	}
}

func TestApplyContentTypes(t *testing.T) {
	// XXX: we program our _expectation_ of content-type into the Filename field
	ftp := packer.FilesToPack{
		"foo.txt":     packer.FileToPack{Filename: "text/plain"},
		"baz/foo.txt": packer.FileToPack{Filename: "text/plain"},

		"baz/qux.png":     packer.FileToPack{Filename: "image/png"},
		"foo/qux.png":     packer.FileToPack{},
		"foo/baz/qux.png": packer.FileToPack{Filename: "image/png"},

		"bar.jpeg":         packer.FileToPack{},
		"foo/baz/bar.jpeg": packer.FileToPack{},
		"baz/bar.jpeg":     packer.FileToPack{Filename: "image/jpeg"},
	}

	ctGlobs, err := parseGlobs([]string{
		"*.txt:text/plain",         // should match anywhere
		"baz/qux.png:image/png",    // won't match /foo/qux.png
		"/baz/bar.jpeg:image/jpeg", // exact prefix match
	})
	if err != nil {
		t.Fatal(err)
	}

	ctGlobs.ApplyContentTypes(ftp)
	for k, v := range ftp {
		if v.Filename != v.ContentType {
			t.Errorf("filename %q: expected content type %q but got %q",
				k, v.Filename, v.ContentType)
		}
	}
}
go.mod
@ -1,16 +1,24 @@
-module github.com/lwithers/htpack/cmd/htpacker
+module src.lwithers.me.uk/go/htpack/cmd/htpacker

-go 1.12
+go 1.22

 require (
+	github.com/andybalholm/brotli v1.1.0
 	github.com/foobaz/go-zopfli v0.0.0-20140122214029-7432051485e2
-	github.com/inconshreveable/mousetrap v1.0.0 // indirect
-	github.com/kr/pretty v0.1.0 // indirect
-	github.com/lwithers/htpack v1.0.0
-	github.com/lwithers/pkg v1.2.1
-	github.com/spf13/cobra v0.0.3
-	github.com/spf13/pflag v1.0.3 // indirect
-	golang.org/x/sys v0.0.0-20190415081028-16da32be82c5
-	gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
-	gopkg.in/yaml.v2 v2.2.2
+	github.com/gosuri/uiprogress v0.0.1
+	github.com/spf13/cobra v1.8.1
+	golang.org/x/sys v0.22.0
+	gopkg.in/yaml.v2 v2.4.0
+	src.lwithers.me.uk/go/htpack v1.3.3
+	src.lwithers.me.uk/go/writefile v1.0.1
 )
+
+require (
+	github.com/gogo/protobuf v1.3.2 // indirect
+	github.com/gosuri/uilive v0.0.4 // indirect
+	github.com/inconshreveable/mousetrap v1.1.0 // indirect
+	github.com/kr/pretty v0.1.0 // indirect
+	github.com/mattn/go-isatty v0.0.20 // indirect
+	github.com/spf13/pflag v1.0.5 // indirect
+	gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 // indirect
+)
go.sum
@ -1,30 +1,68 @@
+github.com/andybalholm/brotli v1.1.0 h1:eLKJA0d02Lf0mVpIDgYnqXcUn0GqVmEFny3VuID1U3M=
+github.com/andybalholm/brotli v1.1.0/go.mod h1:sms7XGricyQI9K10gOSf56VKKWS4oLer58Q+mhRPtnY=
+github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
 github.com/foobaz/go-zopfli v0.0.0-20140122214029-7432051485e2 h1:VA6jElpcJ+wkwEBufbnVkSBCA2TEnxdRppjRT5Kvh0A=
 github.com/foobaz/go-zopfli v0.0.0-20140122214029-7432051485e2/go.mod h1:Yi95+RbwKz7uGndSuUhoq7LJKh8qH8DT9fnL4ewU30k=
-github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
-github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
-github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
-github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
-github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/gosuri/uilive v0.0.4 h1:hUEBpQDj8D8jXgtCdBu7sWsy5sbW/5GhuO8KBwJ2jyY=
+github.com/gosuri/uilive v0.0.4/go.mod h1:V/epo5LjjlDE5RJUcqx8dbw+zc93y5Ya3yg8tfZ74VI=
+github.com/gosuri/uiprogress v0.0.1 h1:0kpv/XY/qTmFWl/SkaJykZXrBBzwwadmW8fRb7RJSxw=
+github.com/gosuri/uiprogress v0.0.1/go.mod h1:C1RTYn4Sc7iEyf6j8ft5dyoZ4212h8G1ol9QQluh5+0=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
 github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
-github.com/lwithers/htpack v1.0.0 h1:opBavUAl6QKjvlxNaOwMAvO+Q+ytZpKSl0iDCYam1Uk=
-github.com/lwithers/htpack v1.0.0/go.mod h1:4dNHChTcK0SzOTVnFt4b0SuK7OMSo8Ge7o1XXYV4xUk=
-github.com/lwithers/pkg v1.2.1 h1:KNnZFGv0iyduc+uUF5UB8vDyr2ofRq930cVKqrpQulY=
-github.com/lwithers/pkg v1.2.1/go.mod h1:0CRdDnVCqIa5uaIs1u8Gmwl3M7sm181QmSmVVaPTZUo=
-github.com/spf13/cobra v0.0.3 h1:ZlrZ4XsMRm04Fr5pSFxBgfND2EBVa1nLpiy1stUsX/8=
-github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
-github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
-github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
-golang.org/x/sys v0.0.0-20180924175946-90868a75fefd/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190415081028-16da32be82c5 h1:UMbOtg4ZL2GyTAolLE9QfNvzskWvFkI935Z98i9moXA=
-golang.org/x/sys v0.0.0-20190415081028-16da32be82c5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
+github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
+github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
+golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
-gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
-gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
+gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+src.lwithers.me.uk/go/htpack v1.3.3 h1:Xvl6qR9HfSblmCgPyu+ACQ9o3aLQSIy3l8CrMbzj/jc=
+src.lwithers.me.uk/go/htpack v1.3.3/go.mod h1:qKgCBgZ6iiiuYOxZkYOPVpXLBzp6gXEd4A0ksxgR6Nk=
+src.lwithers.me.uk/go/writefile v1.0.1 h1:bwBGtvyZfCxFIM14e1aYgJWlZuowKkwJx53OJlUPd0s=
+src.lwithers.me.uk/go/writefile v1.0.1/go.mod h1:NahlmRCtB7kg4ai+zHZgxXdUs+MR8VqWG8mql35TsxA=
@ -5,8 +5,8 @@ import (
 	"fmt"
 	"os"
 
-	"github.com/lwithers/htpack/packed"
 	"github.com/spf13/cobra"
+
+	"src.lwithers.me.uk/go/htpack/packed"
 )
 
 var inspectCmd = &cobra.Command{
@ -32,7 +32,6 @@ var inspectCmd = &cobra.Command{
 
 // Inspect a packfile.
 // TODO: verify etag; verify integrity of compressed data.
-// TODO: skip Gzip/Brotli if not present; print ratio.
 func Inspect(filename string) error {
 	f, err := os.Open(filename)
 	if err != nil {
@ -65,10 +64,46 @@ func Inspect(filename string) error {
 				printSize(info.Brotli.Length), info.Brotli.Offset)
 			}
 		}
+		inspectSummary(dir)
 	}
 	return err
 }
+
+func inspectSummary(dir *packed.Directory) {
+	var (
+		n, ngzip, nbrotli int
+		s, sgzip, sbrotli uint64
+	)
+
+	for _, f := range dir.Files {
+		n++
+		s += f.Uncompressed.Length
+		if f.Gzip != nil {
+			ngzip++
+			sgzip += f.Gzip.Length
+		}
+		if f.Brotli != nil {
+			nbrotli++
+			sbrotli += f.Brotli.Length
+		}
+	}
+
+	fmt.Printf("Uncompressed:\n\tFiles: %d\n\tSize: %s\n",
+		n, printSize(s))
+	if ngzip > 0 {
+		fmt.Printf("gzip compressed:\n\tFiles: %d (%.1f%% of total)\n"+
+			"\tSize: %s\n\tRatio: %.1f%%\n",
+			ngzip, 100*float64(ngzip)/float64(n),
+			printSize(sgzip), 100*float64(sgzip)/float64(s))
+	}
+	if nbrotli > 0 {
+		fmt.Printf("brotli compressed:\n\tFiles: %d (%.1f%% of total)\n"+
+			"\tSize: %s\n\tRatio: %.1f%%\n",
+			nbrotli, 100*float64(nbrotli)/float64(n),
+			printSize(sbrotli), 100*float64(sbrotli)/float64(s))
+	}
+}
+
 func printSize(size uint64) string {
 	switch {
 	case size < 1<<10:
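The diff truncates `printSize` after its first case. The sketch below is a plausible standalone reconstruction of the pattern that first case starts (sizes below 1 KiB printed as bytes, then binary-prefixed units); the exact unit strings and precision used by the real function are assumptions:

```go
package main

import "fmt"

// printSize renders a byte count in human-readable form, following the shape
// suggested by the truncated switch in the inspector: raw bytes below 1 KiB,
// then one decimal place with binary (power-of-two) unit prefixes.
func printSize(size uint64) string {
	switch {
	case size < 1<<10:
		return fmt.Sprintf("%d bytes", size)
	case size < 1<<20:
		return fmt.Sprintf("%.1f KiB", float64(size)/(1<<10))
	case size < 1<<30:
		return fmt.Sprintf("%.1f MiB", float64(size)/(1<<20))
	default:
		return fmt.Sprintf("%.1f GiB", float64(size)/(1<<30))
	}
}

func main() {
	fmt.Println(printSize(512))    // 512 bytes
	fmt.Println(printSize(1536))   // 1.5 KiB
	fmt.Println(printSize(3 << 20)) // 3.0 MiB
}
```

`inspectSummary` feeds its accumulated totals through this same formatter, so per-encoding sizes and the uncompressed total are reported in consistent units.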
@ -7,14 +7,29 @@ import (
|
||||||
"os"
|
"os"
|
||||||
"path/filepath"
|
"path/filepath"
|
||||||
|
|
||||||
"github.com/lwithers/htpack/cmd/htpacker/packer"
|
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
yaml "gopkg.in/yaml.v2"
|
yaml "gopkg.in/yaml.v2"
|
||||||
|
"src.lwithers.me.uk/go/htpack/cmd/htpacker/packer"
|
||||||
|
"src.lwithers.me.uk/go/htpack/packed"
|
||||||
)
|
)
|
||||||
|
|
||||||
var packCmd = &cobra.Command{
|
var packCmd = &cobra.Command{
|
||||||
Use: "pack",
|
Use: "pack",
|
||||||
Short: "creates a packfile from a YAML spec or set of files/dirs",
|
Short: "creates a packfile from a YAML spec or set of files/dirs",
|
||||||
|
Long: `When given a YAML spec file (a template for which can be generated
|
||||||
|
with the "yaml" command), files will be packed exactly as per the spec. The
|
||||||
|
--content-type flag cannot be used and no extra files can be specified.
|
||||||
|
|
||||||
|
When given a list of files and directories to pack, the content type for each
|
||||||
|
file will be automatically detected. It is possible to override the content
|
||||||
|
type by specifying one or more --content-type flags. These take an argument in
|
||||||
|
the form "pattern:content/type". The pattern is matched using common glob
|
||||||
|
(* = wildcard), very similar to .gitignore. If the pattern contains any
|
||||||
|
directory names, these must match the final components of the file to pack's
|
||||||
|
path. If the pattern starts with a "/", then the full path must be matched
|
||||||
|
exactly.
|
||||||
|
`,
|
||||||
|
|
||||||
RunE: func(c *cobra.Command, args []string) error {
|
RunE: func(c *cobra.Command, args []string) error {
|
||||||
// convert "out" to an absolute path, so that it will still
|
// convert "out" to an absolute path, so that it will still
|
||||||
// work after chdir
|
// work after chdir
|
||||||
|
@ -50,6 +65,16 @@ var packCmd = &cobra.Command{
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// parse content-type globs
|
||||||
|
ctGlobList, err := c.Flags().GetStringArray("content-type")
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
ctGlobs, err := parseGlobs(ctGlobList)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
// if "spec" is not present, then we expect a list of input
|
// if "spec" is not present, then we expect a list of input
|
||||||
// files, and we'll build a spec from them
|
// files, and we'll build a spec from them
|
||||||
if spec == "" {
|
if spec == "" {
|
||||||
|
@ -57,12 +82,16 @@ var packCmd = &cobra.Command{
|
||||||
return errors.New("need --yaml, " +
|
return errors.New("need --yaml, " +
|
||||||
"or one or more filenames")
|
"or one or more filenames")
|
||||||
}
|
}
|
||||||
err = PackFiles(c, args, out)
|
err = PackFiles2(c, args, ctGlobs, out)
|
||||||
} else {
|
} else {
|
||||||
if len(args) != 0 {
|
if len(args) != 0 {
|
||||||
return errors.New("cannot specify files " +
|
return errors.New("cannot specify files " +
|
||||||
"when using --yaml")
|
"when using --yaml")
|
||||||
}
|
}
|
||||||
|
if ctGlobs != nil {
|
||||||
|
return errors.New("cannot specify --content-type " +
|
||||||
|
"when using --yaml")
|
||||||
|
}
|
||||||
err = PackSpec(c, spec, out)
|
err = PackSpec(c, spec, out)
|
||||||
}
|
}
|
||||||
if err != nil {
|
if err != nil {
|
||||||
|
@@ -82,14 +111,22 @@ func init() {
		"YAML specification file (if not present, just pack files)")
	packCmd.Flags().StringP("chdir", "C", "",
		"Change to directory before searching for input files")
	packCmd.Flags().StringArrayP("content-type", "", nil,
		"Override content type for pattern, e.g. \"*.foo=bar/baz\" (like .gitignore)")
}

func PackFiles(c *cobra.Command, args []string, out string) error {
	return PackFiles2(c, args, nil, out)
}

func PackFiles2(c *cobra.Command, args []string, ctGlobs ctGlobList, out string) error {
	ftp, err := filesFromList(args)
	if err != nil {
		return err
	}
	ctGlobs.ApplyContentTypes(ftp)

	return doPack(ftp, out)
}

func PackSpec(c *cobra.Command, spec, out string) error {
@@ -103,5 +140,26 @@ func PackSpec(c *cobra.Command, spec, out string) error {
		return fmt.Errorf("parsing YAML spec %s: %v", spec, err)
	}

	return doPack(ftp, out)
}

func doPack(ftp packer.FilesToPack, out string) error {
	prog := newUiProgress(ftp)
	err := packer.Pack2(ftp, out, prog)
	prog.Complete()

	if err == nil {
		fin, err := os.Open(out)
		if err != nil {
			return err
		}
		defer fin.Close()

		_, dir, err := packed.Load(fin)
		if err != nil {
			return err
		}
		inspectSummary(dir)
	}
	return err
}
@@ -1,3 +1,7 @@
/*
Package packer implements the core packing functionality. It is designed to be
used by a wrapper program (CLI etc.).
*/
package packer

import (
@@ -7,35 +11,64 @@ import (
	"io/ioutil"
	"net/http"
	"os"
	"runtime"
	"sync"

	"github.com/andybalholm/brotli"
	"github.com/foobaz/go-zopfli/zopfli"
	"golang.org/x/sys/unix"

	"src.lwithers.me.uk/go/htpack/packed"
	"src.lwithers.me.uk/go/writefile"
)

// FilesToPack is the set of files which will be incorporated into the packfile.
// The key is the path at which the file will be served, and the value gives the
// disk filename as well as headers / options.
type FilesToPack map[string]FileToPack

// FileToPack contains the headers / options for a file which is about to be
// packed.
type FileToPack struct {
	// Filename is the path to the file on disk (relative or absolute).
	Filename string `yaml:"filename"`

	// ContentType is used as the Content-Type header for the source data.
	ContentType string `yaml:"content_type"`

	// DisableCompression can be set to skip any compression for this file.
	DisableCompression bool `yaml:"disable_compression"`

	// DisableGzip can be set to skip gzip compression for this file.
	DisableGzip bool `yaml:"disable_gzip"`

	// DisableBrotli can be set to skip brotli compression for this file.
	DisableBrotli bool `yaml:"disable_brotli"`
}

// Progress is a callback object which reports packing progress.
type Progress interface {
	// Count reports the number of items that have begun processing.
	Count(n int)

	// Begin denotes the processing of an input file.
	Begin(filename, compression string)

	// End denotes the completion of input file processing.
	End(filename, compression string)
}

type ignoreProgress int

func (ignoreProgress) Count(_ int)       {}
func (ignoreProgress) Begin(_, _ string) {}
func (ignoreProgress) End(_, _ string)   {}
const (
	// minCompressionFileSize is the minimum filesize we need before
	// considering compression. Note this must be at least 2, to avoid
	// known bugs in go-zopfli.
	minCompressionFileSize = 128

	// minCompressionSaving means we'll only use the compressed version of
	// the file if it's at least this many bytes smaller than the original.
	// Chosen somewhat arbitrarily; we have to add an HTTP header, and the
@@ -47,16 +80,41 @@ const (
	// smaller than the original. This is a guess at when the decompression
	// overhead outweighs the time saved in transmission.
	minCompressionFraction = 7 // i.e. files must be at least 1/128 smaller

	// padWidth is the padding alignment size expressed as a power of 2.
	// The value 12 (i.e. 4096 bytes) is chosen to align with a common
	// page size and filesystem block size.
	padWidth = 12

	// sendfileLimit is the number of bytes we can transfer through a single
	// sendfile(2) call. This value is from the man page.
	sendfileLimit = 0x7FFFF000
)

// Pack a file. Use Pack2 for progress reporting.
func Pack(filesToPack FilesToPack, outputFilename string) error {
	return Pack2(filesToPack, outputFilename, nil)
}
// Pack2 will pack a file, with progress reporting. The progress interface may
// be nil.
func Pack2(filesToPack FilesToPack, outputFilename string, progress Progress) error {
	if progress == nil {
		progress = ignoreProgress(0)
	}

	finalFname, w, err := writefile.New(outputFilename)
	if err != nil {
		return err
	}
	defer writefile.Abort(w)

	// we use this little structure to serialise file writes below, and
	// it has a couple of convenience methods for common operations
	packer := packer{
		w:        w,
		progress: progress,
	}

	// write initial header (will rewrite offset/length when known)
	hdr := &packed.Header{
@@ -65,127 +123,94 @@ func Pack(filesToPack FilesToPack, outputFilename string) error {
		DirectoryOffset: 1,
		DirectoryLength: 1,
	}
	m, err := hdr.Marshal()
	if err != nil {
		return fmt.Errorf("failed to marshal header (%T): %v", hdr, err)
	}
	if _, err = w.Write(m); err != nil {
		return err
	}
	if err = packer.pad(); err != nil {
		return err
	}

	// Channel to limit number of CPU-bound goroutines. One token is written
	// to the channel for each active worker; since the channel is bounded,
	// further writes will block at the limit. As workers complete, they
	// consume a token from the channel.
	nCPU := runtime.NumCPU() + 2 // +2 for I/O bound portions
	if nCPU < 4 {
		nCPU = 4
	}
	packer.cpus = make(chan struct{}, nCPU)

	// Channel to report worker errors. Writes should be non-blocking. If
	// your error is dropped, don't worry, an earlier error will be
	// reported.
	packer.errors = make(chan error, 1)

	// Channel to abort further operations. It should be closed to abort.
	// The closer should be the one who writes onto packer.errors.
	packer.aborted = make(chan struct{})

	// write the packed files, storing info for the directory structure
	packer.dir = &packed.Directory{
		Files: make(map[string]*packed.File),
	}

	var count int
PackingLoop:
	for path, fileToPack := range filesToPack {
		select {
		case <-packer.aborted:
			// a worker reported an error; break out of loop early
			break PackingLoop
		default:
			packer.packFile(path, fileToPack)
			count++
			progress.Count(count)
		}
	}

	// wait for all goroutines to complete
	for n := 0; n < nCPU; n++ {
		packer.cpus <- struct{}{}
	}

	// check whether any of the just-completed goroutines returned an error
	select {
	case err = <-packer.errors:
		return err
	default:
	}

	// write the directory
	if m, err = packer.dir.Marshal(); err != nil {
		err = fmt.Errorf("failed to marshal directory object (%T): %v",
			packer.dir, err)
		return err
	}
	dirOffset, err := w.Seek(0, os.SEEK_CUR)
	if err != nil {
		return err
	}

	if _, err := w.Write(m); err != nil {
		return err
	}

	// now modify the header at the start of the file
	hdr.DirectoryOffset = uint64(dirOffset)
	hdr.DirectoryLength = uint64(len(m))
	if m, err = hdr.Marshal(); err != nil {
		return fmt.Errorf("failed to marshal header (%T): %v", hdr, err)
	}
	if _, err = w.WriteAt(m, 0); err != nil {
		return err
	}

	// all done!
	return writefile.Commit(finalFname, w)
}

func etag(in []byte) string {
@@ -194,12 +219,233 @@ func etag(in []byte) string {
	return fmt.Sprintf(`"1--%x"`, h.Sum(nil))
}

func compressionWorthwhile(data []byte, compressed os.FileInfo) bool {
	uncompressedSize := uint64(len(data))
	sz := uint64(compressed.Size())

	switch {
	case sz+minCompressionSaving > uncompressedSize,
		sz+(uncompressedSize>>minCompressionFraction) > uncompressedSize:
		return false
	default:
		return true
	}
}

// packer packs input files into the output file. It has methods for each type
// of compression. Unexported methods assume they are called in a context where
// the lock is not needed or already taken; exported methods take the lock.
type packer struct {
	w        *os.File
	lock     sync.Mutex
	cpus     chan struct{}
	errors   chan error
	aborted  chan struct{}
	dir      *packed.Directory
	progress Progress
}

// pad will move the file write pointer to the next padding boundary. It is not
// concurrency safe.
func (p *packer) pad() error {
	pos, err := p.w.Seek(0, os.SEEK_CUR)
	if err != nil {
		return err
	}

	pos &= (1 << padWidth) - 1
	if pos == 0 { // already aligned
		return nil
	}

	_, err = p.w.Seek((1<<padWidth)-pos, os.SEEK_CUR)
	return err
}

// appendPath will copy file data from srcPath and append it to the output
// file. The offset and length are stored in 'data' on success. It is not
// concurrency safe.
func (p *packer) appendPath(srcPath string, data *packed.FileData) error {
	// open the input file and grab its length
	in, err := os.Open(srcPath)
	if err != nil {
		return err
	}
	defer in.Close()

	fi, err := in.Stat()
	if err != nil {
		return err
	}

	// copy in the file data
	return p.appendFile(in, fi.Size(), data)
}

// appendFile will copy file data from src and append it to the output file.
// The offset and length are stored in 'data' on success. It is not
// concurrency safe.
func (p *packer) appendFile(src *os.File, srcLen int64, data *packed.FileData) error {
	// retrieve current file position and store in data.Offset
	off, err := p.w.Seek(0, os.SEEK_CUR)
	if err != nil {
		return err
	}
	data.Length = uint64(srcLen)
	data.Offset = uint64(off)

	// copy in the file data
	remain := srcLen
	off = 0
	for remain > 0 {
		var amt int
		if remain > sendfileLimit {
			amt = sendfileLimit
		} else {
			amt = int(remain)
		}

		amt, err := unix.Sendfile(int(p.w.Fd()), int(src.Fd()), &off, amt)
		remain -= int64(amt)
		if err != nil {
			return fmt.Errorf("sendfile (copying data to "+
				"htpack): %v", err)
		}
	}

	// leave output file padded to next boundary
	return p.pad()
}

func (p *packer) packFile(path string, fileToPack FileToPack) {
	// open and mmap input file
	f, err := os.Open(fileToPack.Filename)
	if err != nil {
		p.Abort(err)
		return
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		p.Abort(err)
		return
	}

	var data []byte
	if fi.Size() > 0 {
		data, err = unix.Mmap(int(f.Fd()), 0, int(fi.Size()),
			unix.PROT_READ, unix.MAP_SHARED)
		if err != nil {
			p.Abort(fmt.Errorf("mmap %s: %v", fileToPack.Filename, err))
			return
		}
	}

	// prepare initial directory entry
	info := &packed.File{
		Etag:        etag(data),
		ContentType: fileToPack.ContentType,
	}
	if info.ContentType == "" {
		info.ContentType = http.DetectContentType(data)
	}
	p.dir.Files[path] = info // NB: this part is not concurrent, so no mutex

	// list of operations on this input file that we'll carry out asynchronously
	ops := []func() error{
		func() error {
			p.progress.Begin(fileToPack.Filename, "uncompressed")
			defer p.progress.End(fileToPack.Filename, "uncompressed")
			return p.Uncompressed(fileToPack.Filename, info)
		},
	}
	if !fileToPack.DisableCompression && !fileToPack.DisableGzip {
		ops = append(ops, func() error {
			p.progress.Begin(fileToPack.Filename, "gzip")
			defer p.progress.End(fileToPack.Filename, "gzip")
			if err := p.Gzip(data, info); err != nil {
				return fmt.Errorf("gzip compression of %s "+
					"failed: %w", fileToPack.Filename, err)
			}
			return nil
		})
	}
	if !fileToPack.DisableCompression && !fileToPack.DisableBrotli {
		ops = append(ops, func() error {
			p.progress.Begin(fileToPack.Filename, "brotli")
			defer p.progress.End(fileToPack.Filename, "brotli")
			if err := p.Brotli(data, info); err != nil {
				return fmt.Errorf("brotli compression of %s "+
					"failed: %w", fileToPack.Filename, err)
			}
			return nil
		})
	}

	// we have multiple operations on the file, and we need to wait for
	// them all to complete before munmap()
	wg := new(sync.WaitGroup)
	wg.Add(len(ops))
	go func() {
		wg.Wait()
		unix.Munmap(data)
	}()

	for _, op := range ops {
		select {
		case <-p.aborted:
			// skip the operation
			wg.Done()

		case p.cpus <- struct{}{}:
			go func(op func() error) {
				if err := op(); err != nil {
					p.Abort(err)
				}
				// release CPU token
				<-p.cpus
				wg.Done()
			}(op)
		}
	}
}

// Abort records that an error occurred and records it onto the errors channel.
// It signals workers to abort by closing the aborted channel. If called
// multiple times, only one error will be recorded, and the aborted channel will
// only be closed once.
func (p *packer) Abort(err error) {
	select {
	case p.errors <- err:
		// only one error can be written to this channel, so the write
		// acts as a lock to ensure only a single close operation takes
		// place
		close(p.aborted)
	default:
		// errors channel was already written, so we're already aborted
	}
}

// Uncompressed copies in an uncompressed file.
func (p *packer) Uncompressed(srcPath string, dir *packed.File) error {
	dir.Uncompressed = new(packed.FileData)
	p.lock.Lock()
	defer p.lock.Unlock()
	return p.appendPath(srcPath, dir.Uncompressed)
}
// Gzip will gzip input data to a temporary file, and then append that to the
// output file.
func (p *packer) Gzip(data []byte, dir *packed.File) error {
	if len(data) < minCompressionFileSize {
		return nil
	}

	// write via temporary file
	tmpfile, err := ioutil.TempFile("", "")
	if err != nil {
		return err
	}
	defer os.Remove(tmpfile.Name())
	defer tmpfile.Close()
@@ -212,128 +458,69 @@ func packOneGzip(packer *packWriter, data []byte, uncompressedSize uint64,

	buf := bufio.NewWriter(tmpfile)
	if err = zopfli.GzipCompress(&opts, data, buf); err != nil {
		return err
	}
	if err = buf.Flush(); err != nil {
		return err
	}

	// grab file length, evaluate whether compression is worth it
	fi, err := tmpfile.Stat()
	if err != nil {
		return err
	}
	if !compressionWorthwhile(data, fi) {
		return nil
	}

	// save the compressed data
	dir.Gzip = new(packed.FileData)
	p.lock.Lock()
	defer p.lock.Unlock()
	return p.appendFile(tmpfile, fi.Size(), dir.Gzip)
}

// Brotli will compress input data to a temporary file, and then append that to
// the output file.
func (p *packer) Brotli(data []byte, dir *packed.File) error {
	if len(data) < minCompressionFileSize {
		return nil
	}

	// write via temporary file
	tmpfile, err := ioutil.TempFile("", "")
	if err != nil {
		return err
	}
	defer os.Remove(tmpfile.Name())
	defer tmpfile.Close()

	// compress
	buf := bufio.NewWriter(tmpfile)
	comp := brotli.NewWriterOptions(buf, brotli.WriterOptions{
		Quality: 11,
	})
	if _, err = comp.Write(data); err != nil {
		return err
	}
	if err = comp.Close(); err != nil {
		return err
	}
	if err = buf.Flush(); err != nil {
		return err
	}

	// grab file length, evaluate whether compression is worth it
	fi, err := tmpfile.Stat()
	if err != nil {
		return err
	}
	if !compressionWorthwhile(data, fi) {
		return nil
	}

	// save the compressed data
	dir.Brotli = new(packed.FileData)
	p.lock.Lock()
	defer p.lock.Unlock()
	return p.appendFile(tmpfile, fi.Size(), dir.Brotli)
}
@@ -0,0 +1,131 @@
package main

import (
	"bytes"
	"slices"
	"sync"

	"github.com/gosuri/uiprogress"
	"src.lwithers.me.uk/go/htpack/cmd/htpacker/packer"
)

type uiProgress struct {
	p                          *uiprogress.Progress
	uncompressed, gzip, brotli *uiProgressBar
}

func newUiProgress(ftp packer.FilesToPack) *uiProgress {
	up := &uiProgress{
		p: uiprogress.New(),
	}

	up.uncompressed = newUiProgressBar(up.p, len(ftp), "uncompressed")
	var nGzip, nBrotli int
	for _, f := range ftp {
		if !f.DisableCompression && !f.DisableGzip {
			nGzip++
		}
		if !f.DisableCompression && !f.DisableBrotli {
			nBrotli++
		}
	}
	if nGzip > 0 {
		up.gzip = newUiProgressBar(up.p, nGzip, "gzip")
	}
	if nBrotli > 0 {
		up.brotli = newUiProgressBar(up.p, nBrotli, "brotli")
	}

	up.p.Start()
	return up
}

func (up *uiProgress) Count(_ int) {
}

func (up *uiProgress) Begin(filename, compression string) {
	up.bar(compression).begin(filename)
}

func (up *uiProgress) End(filename, compression string) {
	up.bar(compression).end(filename)
}

func (up *uiProgress) bar(compression string) *uiProgressBar {
	switch compression {
	case "uncompressed":
		return up.uncompressed
	case "gzip":
		return up.gzip
	case "brotli":
		return up.brotli
	}
	return nil
}

func (up *uiProgress) Complete() {
	up.p.Stop()
}

type uiProgressBar struct {
	bar      *uiprogress.Bar
	lock     sync.Mutex
	inflight []string
}

func newUiProgressBar(p *uiprogress.Progress, total int, compression string) *uiProgressBar {
	bar := &uiProgressBar{
		bar: p.AddBar(total).AppendCompleted(),
	}

	var buf bytes.Buffer
	bar.bar.PrependFunc(func(*uiprogress.Bar) string {
		bar.lock.Lock()
		defer bar.lock.Unlock()
		buf.Reset()
		buf.WriteString(compression)
		if len(bar.inflight) > 0 {
			buf.WriteString(" (")
			for i, f := range bar.inflight {
				if i > 0 {
					buf.WriteString(", ")
				}
				buf.WriteString(f)
			}
			buf.WriteRune(')')
		}
		if buf.Len() < 40 {
			buf.WriteString("                                        ")
			buf.Truncate(40)
		} else if buf.Len() > 40 {
			buf.Truncate(39)
			buf.WriteString("…")
		}
		return buf.String()
	})

	return bar
}

func (bar *uiProgressBar) begin(filename string) {
	if bar == nil {
		return
	}
	bar.lock.Lock()
	defer bar.lock.Unlock()

	bar.inflight = append(bar.inflight, filename)
}

func (bar *uiProgressBar) end(filename string) {
	if bar == nil {
		return
	}
	bar.lock.Lock()
	defer bar.lock.Unlock()

	bar.bar.Incr()
	if idx := slices.Index(bar.inflight, filename); idx != -1 {
		bar.inflight = slices.Delete(bar.inflight, idx, idx+1)
	}
}
@@ -3,15 +3,16 @@ package main
 import (
 	"errors"
 	"fmt"
+	"io"
 	"io/ioutil"
 	"net/http"
 	"os"
 	"path/filepath"
 	"strings"

-	"github.com/lwithers/htpack/cmd/htpacker/packer"
 	"github.com/spf13/cobra"
 	yaml "gopkg.in/yaml.v2"
+	"src.lwithers.me.uk/go/htpack/cmd/htpacker/packer"
 )

 var yamlCmd = &cobra.Command{
@@ -125,13 +126,22 @@ func filesFromListR(prefix, arg string, ftp packer.FilesToPack) error {

 	case fi.Mode().IsRegular():
 		// sniff content type
+		var ctype string
 		buf := make([]byte, 512)
 		n, err := f.Read(buf)
-		if err != nil {
-			return err
+		switch err {
+		case nil:
+			buf = buf[:n]
+			ctype = http.DetectContentType(buf)
+
+		case io.EOF:
+			// Empty file; this is typically due to things like
+			// npm webpack producing empty .css files.
+			ctype = "text/plain; charset=UTF-8"
+
+		default:
+			return fmt.Errorf("failed to read %s: %v", arg, err)
 		}
-		buf = buf[:n]
-		ctype := http.DetectContentType(buf)

 		// augmented rules for JS / CSS / etc.
 		switch {
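Reviewer note on the hunk above: moving from `if err != nil` to `switch err` means an empty file (`io.EOF` on first read) no longer aborts packing and instead gets a plain-text content type. The fallback can be exercised in isolation; `detectType` below is a hypothetical reduction of the logic, not an htpacker function:

```go
package main

import (
	"fmt"
	"net/http"
)

// detectType mirrors the sniffing logic in filesFromListR: an empty buffer
// (empty file) falls back to plain text instead of being treated as an error.
func detectType(buf []byte) string {
	if len(buf) == 0 {
		return "text/plain; charset=UTF-8"
	}
	return http.DetectContentType(buf)
}

func main() {
	fmt.Println(detectType(nil))
	fmt.Println(detectType([]byte("<html><body>hi</body></html>")))
}
```

`http.DetectContentType` implements the WHATWG MIME sniffing algorithm, which is why the `.css`/`.js` extension overrides in the next hunk are still needed: those types are not reliably sniffable from content.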
@@ -140,9 +150,11 @@ func filesFromListR(prefix, arg string, ftp packer.FilesToPack) error {
 		case ".css":
 			ctype = "text/css"
 		case ".js":
-			ctype = "application/javascript"
+			ctype = "text/javascript"
 		case ".json":
 			ctype = "application/json"
+		case ".svg":
+			ctype = "image/svg+xml"
 		}

 	case strings.HasPrefix(ctype, "text/xml"):
@@ -1,14 +1,15 @@
-module github.com/lwithers/htpack/cmd/packserver
+module src.lwithers.me.uk/go/htpack/cmd/packserver

-go 1.12
+go 1.22

 require (
-	github.com/inconshreveable/mousetrap v1.0.0 // indirect
-	github.com/kisielk/errcheck v1.2.0 // indirect
-	github.com/lwithers/htpack v1.1.1
-	github.com/spf13/cobra v0.0.3
-	github.com/spf13/pflag v1.0.3 // indirect
-	golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a // indirect
-	golang.org/x/net v0.0.0-20190415100556-4a65cf94b679 // indirect
-	golang.org/x/tools v0.0.0-20190411180116-681f9ce8ac52 // indirect
+	github.com/spf13/cobra v1.8.1
+	src.lwithers.me.uk/go/htpack v1.3.3
+)
+
+require (
+	github.com/gogo/protobuf v1.3.2 // indirect
+	github.com/inconshreveable/mousetrap v1.1.0 // indirect
+	github.com/spf13/pflag v1.0.5 // indirect
+	golang.org/x/sys v0.22.0 // indirect
 )
@@ -1,31 +1,45 @@
-github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
-github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
-github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
-github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
-github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
-github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
+github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-github.com/lwithers/htpack v0.0.0-20190412081623-ea77f42dc393 h1:h++VdZ2eeJC9hf+W+LTVsYdYclJZcz6H5DYAMtGfzBA=
-github.com/lwithers/htpack v0.0.0-20190412081623-ea77f42dc393/go.mod h1:+9noAoJ9IIiHkwn2Z2Po5upZOKItKKFgYr/cMESGYrc=
-github.com/lwithers/htpack v1.0.0 h1:opBavUAl6QKjvlxNaOwMAvO+Q+ytZpKSl0iDCYam1Uk=
-github.com/lwithers/htpack v1.0.0/go.mod h1:4dNHChTcK0SzOTVnFt4b0SuK7OMSo8Ge7o1XXYV4xUk=
-github.com/lwithers/htpack v1.1.0 h1:pURTwBKgcmLYpN8M+qT9/Ks2+kLy8cbQqgJZa6/QPaw=
-github.com/lwithers/htpack v1.1.0/go.mod h1:4dNHChTcK0SzOTVnFt4b0SuK7OMSo8Ge7o1XXYV4xUk=
-github.com/spf13/cobra v0.0.3 h1:ZlrZ4XsMRm04Fr5pSFxBgfND2EBVa1nLpiy1stUsX/8=
-github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
-github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
-github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
+github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
+github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/crypto v0.0.0-20190411191339-88737f569e3a/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
-golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20190415100556-4a65cf94b679/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/sys v0.0.0-20180924175946-90868a75fefd h1:ELJRxcWg6//yYBDjuf/SnMg1+X0jj5+BP5xXF31wl4w=
-golang.org/x/sys v0.0.0-20180924175946-90868a75fefd/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190403152447-81d4e9dc473e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20190415081028-16da32be82c5 h1:UMbOtg4ZL2GyTAolLE9QfNvzskWvFkI935Z98i9moXA=
-golang.org/x/sys v0.0.0-20190415081028-16da32be82c5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
+golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
-golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-golang.org/x/tools v0.0.0-20190411180116-681f9ce8ac52/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+src.lwithers.me.uk/go/htpack v1.3.3 h1:Xvl6qR9HfSblmCgPyu+ACQ9o3aLQSIy3l8CrMbzj/jc=
+src.lwithers.me.uk/go/htpack v1.3.3/go.mod h1:qKgCBgZ6iiiuYOxZkYOPVpXLBzp6gXEd4A0ksxgR6Nk=
@@ -5,15 +5,21 @@ package main

 import (
 	"bufio"
+	"context"
 	"errors"
 	"fmt"
 	"net/http"
 	"os"
+	"os/signal"
 	"path/filepath"
+	"sort"
 	"strings"
+	"sync/atomic"
+	"syscall"
+	"time"

-	"github.com/lwithers/htpack"
 	"github.com/spf13/cobra"
+	"src.lwithers.me.uk/go/htpack"
 )

 var rootCmd = &cobra.Command{
@@ -49,6 +55,12 @@ func main() {
 		"Name of index file (index.html or similar)")
 	rootCmd.Flags().Duration("expiry", 0,
 		"Tell client how long it can cache data for; 0 means no caching")
+	rootCmd.Flags().String("fallback-404", "",
+		"Name of file to return if response would be 404 (spa.html or similar)")
+	rootCmd.Flags().String("frames", "sameorigin",
+		"Override X-Frame-Options header (can be sameorigin, deny, allow)")
+	rootCmd.Flags().Duration("graceful-shutdown-delay", 3*time.Second,
+		"Number of seconds to wait after receiving SIGTERM before initiating graceful shutdown")

 	if err := rootCmd.Execute(); err != nil {
 		fmt.Fprintln(os.Stderr, err)
@@ -80,6 +92,23 @@ func run(c *cobra.Command, args []string) error {
 		certFile = keyFile
 	}

+	// parse frames header
+	framesHeader := "SAMEORIGIN"
+	frames, err := c.Flags().GetString("frames")
+	if err != nil {
+		return err
+	}
+	switch frames {
+	case "sameorigin":
+		framesHeader = "SAMEORIGIN"
+	case "allow":
+		framesHeader = ""
+	case "deny":
+		framesHeader = "DENY"
+	default:
+		return errors.New("--frames must be one of sameorigin, deny, allow")
+	}
+
 	// parse extra headers
 	extraHeaders := make(http.Header)
 	hdrs, err := c.Flags().GetStringSlice("header")
@@ -124,6 +153,21 @@ func run(c *cobra.Command, args []string) error {
 		return err
 	}

+	// optional 404 fallback file
+	fallback404File, err := c.Flags().GetString("fallback-404")
+	if err != nil {
+		return err
+	}
+
+	// graceful shutdown delay must be > 0
+	gracefulShutdownDelay, err := c.Flags().GetDuration("graceful-shutdown-delay")
+	if err != nil {
+		return err
+	}
+	if gracefulShutdownDelay <= 0 {
+		return errors.New("graceful shutdown delay must be > 0s")
+	}
+
 	// verify .htpack specifications
 	if len(args) == 0 {
 		return errors.New("must specify one or more .htpack files")
@@ -149,6 +193,7 @@ func run(c *cobra.Command, args []string) error {
 	}

 	// load packfiles, registering handlers as we go
+	router := &routerHandler{}
 	for prefix, packfile := range packPaths {
 		packHandler, err := htpack.New(packfile)
 		if err != nil {
@@ -157,27 +202,65 @@ func run(c *cobra.Command, args []string) error {
 		if indexFile != "" {
 			packHandler.SetIndex(indexFile)
 		}
+		if err = packHandler.SetNotFound(fallback404File); err != nil {
+			return fmt.Errorf("%s: fallback-404 resource %q "+
+				"not found in packfile", prefix, fallback404File)
+		}
+		packHandler.SetHeader("X-Frame-Options", framesHeader)

-		handler := &addHeaders{
+		var handler http.Handler = &addHeaders{
 			extraHeaders: extraHeaders,
 			handler:      packHandler,
 		}

 		if prefix != "/" {
-			http.Handle(prefix+"/",
-				http.StripPrefix(prefix, handler))
-		} else {
-			http.Handle("/", handler)
+			handler = http.StripPrefix(prefix, handler)
 		}
+		router.AddRoute(prefix, handler)
 	}

+	// HTTP server object setup
+	sv := &http.Server{
+		Addr:    bindAddr,
+		Handler: router,
+	}
+
+	// register SIGINT, SIGTERM handler
+	sigch := make(chan os.Signal, 1)
+	signal.Notify(sigch, syscall.SIGINT, syscall.SIGTERM)
+	var (
+		// if we are shut down by a signal, then http.ListenAndServe()
+		// returns straight away, but we actually need to wait for
+		// Shutdown() to complete prior to returning / exiting
+		isSignalled atomic.Bool
+		signalDone  = make(chan struct{})
+	)
+	go func() {
+		<-sigch
+		time.Sleep(gracefulShutdownDelay)
+		isSignalled.Store(true)
+		shutctx, shutcancel := context.WithTimeout(context.Background(), gracefulShutdownDelay)
+		sv.Shutdown(shutctx)
+		shutcancel()
+		close(signalDone)
+	}()
+
 	// main server loop
 	if keyFile == "" {
-		err = http.ListenAndServe(bindAddr, nil)
+		err = sv.ListenAndServe()
 	} else {
-		err = http.ListenAndServeTLS(bindAddr, certFile, keyFile, nil)
+		err = sv.ListenAndServeTLS(certFile, keyFile)
 	}
-	if err != nil {
+
+	// if we were shut down by a signal, wait for Shutdown() to return
+	if isSignalled.Load() {
+		<-signalDone
+	}
+
+	switch err {
+	case nil, http.ErrServerClosed:
+		// OK
+	default:
 		fmt.Fprintln(os.Stderr, err)
 		os.Exit(1)
 	}
@@ -228,3 +311,53 @@ func (ah *addHeaders) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 	}
 	ah.handler.ServeHTTP(w, r)
 }
+
+// routeEntry is used within routerHandler to map a specific prefix to a
+// specific handler.
+type routeEntry struct {
+	// prefix is a path prefix with trailing "/" such as "/foo/".
+	prefix string
+
+	// handler for the request if prefix matches.
+	handler http.Handler
+}
+
+// routerHandler holds a list of routes sorted by longest-prefix-first.
+type routerHandler struct {
+	// entries are the list of prefixes, with longest prefix strings first.
+	// The sorting ensures we can iterate through from the start and match
+	// "/dir/subdir/" in preference to just "/dir/".
+	entries []routeEntry
+}
+
+// AddRoute adds a new entry into the handler. It is not concurrency safe; the
+// handler should not be in use.
+func (rh *routerHandler) AddRoute(prefix string, handler http.Handler) {
+	if !strings.HasSuffix(prefix, "/") {
+		prefix += "/"
+	}
+	rh.entries = append(rh.entries, routeEntry{
+		prefix:  prefix,
+		handler: handler,
+	})
+	sort.Slice(rh.entries, func(i, j int) bool {
+		l1, l2 := len(rh.entries[i].prefix), len(rh.entries[j].prefix)
+		if l1 > l2 {
+			return true
+		}
+		if l1 == l2 {
+			return rh.entries[i].prefix < rh.entries[j].prefix
+		}
+		return false
+	})
+}
+
+func (rh *routerHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+	for _, entry := range rh.entries {
+		if strings.HasPrefix(r.URL.Path, entry.prefix) {
+			entry.handler.ServeHTTP(w, r)
+			return
+		}
+	}
+	http.NotFound(w, r)
+}
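The longest-prefix-first invariant that `AddRoute` maintains can be demonstrated in isolation. `match` below is a hypothetical reduction of `routerHandler.ServeHTTP` to plain strings (same sort comparator, first prefix hit wins):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// match sorts prefixes longest-first (ties broken lexically, as in AddRoute)
// and returns the first one that prefixes path, or "" for a would-be 404.
func match(prefixes []string, path string) string {
	sort.Slice(prefixes, func(i, j int) bool {
		if len(prefixes[i]) != len(prefixes[j]) {
			return len(prefixes[i]) > len(prefixes[j])
		}
		return prefixes[i] < prefixes[j]
	})
	for _, p := range prefixes {
		if strings.HasPrefix(path, p) {
			return p
		}
	}
	return ""
}

func main() {
	routes := []string{"/", "/dir/", "/dir/subdir/"}
	fmt.Println(match(routes, "/dir/subdir/file.css")) // longest prefix wins
	fmt.Println(match(routes, "/other/file.css"))      // falls through to "/"
}
```

Because "/" is itself a registered prefix, the `http.NotFound` branch only fires when no pack is mounted at the root.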
go.mod (8)
@@ -1,6 +1,8 @@
-module github.com/lwithers/htpack
+module src.lwithers.me.uk/go/htpack

 require (
-	github.com/gogo/protobuf v1.2.1
-	golang.org/x/sys v0.0.0-20190415081028-16da32be82c5
+	github.com/gogo/protobuf v1.3.2
+	golang.org/x/sys v0.22.0
 )
+
+go 1.22
go.sum (38)
@@ -1,7 +1,33 @@
-github.com/gogo/protobuf v1.2.1 h1:/s5zKNz0uPFCZ5hddgPdo2TK2TVrUNMn0OOX8/aZMTE=
-github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
-github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
+github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
+github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
 github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
-golang.org/x/sys v0.0.0-20190415081028-16da32be82c5 h1:UMbOtg4ZL2GyTAolLE9QfNvzskWvFkI935Z98i9moXA=
-golang.org/x/sys v0.0.0-20190415081028-16da32be82c5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
+golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
handler.go (177)
@@ -1,18 +1,17 @@
 package htpack

 import (
-	"net"
+	"fmt"
 	"net/http"
 	"os"
 	"path"
 	"path/filepath"
 	"strconv"
 	"strings"
-	"syscall"
 	"time"

-	"github.com/lwithers/htpack/packed"
 	"golang.org/x/sys/unix"
+	"src.lwithers.me.uk/go/htpack/packed"
 )

 const (
@@ -20,14 +19,13 @@ const (
 	encodingBrotli = "br"
 )

-// TODO: logging
-
 // New returns a new handler. Standard security headers are set.
 func New(packfile string) (*Handler, error) {
 	f, err := os.Open(packfile)
 	if err != nil {
 		return nil, err
 	}
+	defer f.Close()

 	fi, err := f.Stat()
 	if err != nil {
@@ -36,19 +34,16 @@ func New(packfile string) (*Handler, error) {
 	mapped, err := unix.Mmap(int(f.Fd()), 0, int(fi.Size()),
 		unix.PROT_READ, unix.MAP_SHARED)
 	if err != nil {
-		f.Close()
 		return nil, err
 	}

 	_, dir, err := packed.Load(f)
 	if err != nil {
 		unix.Munmap(mapped)
-		f.Close()
 		return nil, err
 	}

 	h := &Handler{
-		f:       f,
 		mapped:  mapped,
 		dir:     dir.Files,
 		headers: make(map[string]string),
@@ -56,7 +51,7 @@ func New(packfile string) (*Handler, error) {
 	}

 	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options
-	h.SetHeader("X-Frame-Options", "sameorigin")
+	h.SetHeader("X-Frame-Options", "SAMEORIGIN")

 	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options
 	h.SetHeader("X-Content-Type-Options", "nosniff")
@@ -66,11 +61,11 @@ func New(packfile string) (*Handler, error) {

 // Handler implements http.Handler and allows options to be set.
 type Handler struct {
-	f         *os.File
 	mapped    []byte
 	dir       map[string]*packed.File
 	headers   map[string]string
 	startTime time.Time
+	notFound  *packed.File
 }

 // SetHeader allows a custom header to be set on HTTP responses. These are
@@ -109,6 +104,28 @@ func (h *Handler) SetIndex(filename string) {
 	}
 }

+// SetNotFound allows overriding the returned resource when a request is made
+// for a resource that does not exist. The default behaviour would be to return
+// a standard HTTP 404 Not Found response; calling this function with an empty
+// string will restore that behaviour.
+//
+// This function will return an error if the named resource is not present in
+// the packfile.
+func (h *Handler) SetNotFound(notFound string) error {
+	if notFound == "" {
+		h.notFound = nil
+		return nil
+	}
+
+	notFound = path.Clean(notFound)
+	dir := h.dir[path.Clean(notFound)]
+	if dir == nil {
+		return fmt.Errorf("no such resource %q", notFound)
+	}
+	h.notFound = dir
+	return nil
+}
+
 // ServeHTTP handles requests for files. It supports GET and HEAD methods, with
 // anything else returning a 405. Exact path matches are required, else a 404 is
 // returned.
@@ -129,14 +146,18 @@ func (h *Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {

 	info := h.dir[path.Clean(req.URL.Path)]
 	if info == nil {
-		http.NotFound(w, req)
-		return
+		if h.notFound == nil {
+			http.NotFound(w, req)
+			return
+		}
+		info = h.notFound
 	}

 	// set standard headers
 	w.Header().Set("Vary", "Accept-Encoding")
 	w.Header().Set("Etag", info.Etag)
 	w.Header().Set("Content-Type", info.ContentType)
+	w.Header().Set("Accept-Ranges", "bytes")

 	// process etag / modtime
 	if clientHasCachedVersion(info.Etag, h.startTime, req) {
@ -155,89 +176,29 @@ func (h *Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
|
 		w.Header().Set("Content-Encoding", encodingGzip)
 	}

-	// TODO: Range
+	// range support (single-part ranges only)
+	// https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests#Single_part_ranges
+	offset, length, isPartial := getFileRange(data, req)
+	if isPartial {
+		w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d",
+			offset, offset+length-1, data.Length))
+	}

 	// now we know exactly what we're writing, finalise HTTP header
-	w.Header().Set("Content-Length", strconv.FormatUint(data.Length, 10))
-	w.WriteHeader(http.StatusOK)
+	w.Header().Set("Content-Length", strconv.FormatUint(length, 10))
+	if isPartial {
+		w.WriteHeader(http.StatusPartialContent)
+	} else {
+		w.WriteHeader(http.StatusOK)
+	}
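The Content-Range value is built from the inclusive interval [offset, offset+length-1] over the total file size; a small standalone sketch of that arithmetic (the helper name is illustrative, the diff formats the header inline):

```go
package main

import "fmt"

// contentRange formats a single-part Content-Range value the same way the
// diff does: an inclusive byte interval followed by the total file size.
func contentRange(offset, length, total uint64) string {
	return fmt.Sprintf("bytes %d-%d/%d", offset, offset+length-1, total)
}

func main() {
	// a 50-byte slice starting at offset 100 of a 1000-byte file
	fmt.Println(contentRange(100, 50, 1000)) // bytes 100-149/1000
}
```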
 	// send body (though not for HEAD)
 	if req.Method == "HEAD" {
 		return
 	}
-	h.sendfile(w, data)
-}
-
-func (h *Handler) sendfile(w http.ResponseWriter, data *packed.FileData) {
-	hj, ok := w.(http.Hijacker)
-	if !ok {
-		// fallback
-		h.copyfile(w, data)
-		return
-	}
-
-	conn, buf, err := hj.Hijack()
-	if err != nil {
-		// fallback
-		h.copyfile(w, data)
-		return
-	}
-
-	tcp, ok := conn.(*net.TCPConn)
-	if !ok {
-		// fallback
-		h.copyfile(w, data)
-		return
-	}
-	defer tcp.Close()
-
-	rawsock, err := tcp.SyscallConn()
-	if err == nil {
-		err = buf.Flush()
-	}
-	if err == nil {
-		// Since we're bypassing Read / Write, there is no integration
-		// with Go's epoll-driven event handling for this file
-		// descriptor. We'll therefore get EAGAIN behaviour rather
-		// than blocking for Sendfile(). Work around this by setting
-		// the file descriptor to blocking mode; since this function
-		// now guarantees (via defer tcp.Close()) that the connection
-		// will be closed and not be passed back to Go's own event
-		// loop, this is safe to do.
-		rawsock.Control(func(outfd uintptr) {
-			err = syscall.SetNonblock(int(outfd), false)
-		})
-	}
-	if err != nil {
-		// error only returned if the underlying connection is broken,
-		// so there's no point calling sendfile
-		return
-	}
-
-	off := int64(data.Offset)
-	remain := data.Length
-	for remain > 0 {
-		var amt int
-		if remain > (1 << 30) {
-			amt = (1 << 30)
-		} else {
-			amt = int(remain)
-		}
-
-		// TODO: outer error handling
-
-		rawsock.Control(func(outfd uintptr) {
-			amt, err = unix.Sendfile(int(outfd), int(h.f.Fd()), &off, amt)
-		})
-		remain -= uint64(amt)
-		if err != nil {
-			return
-		}
-	}
-}
-
-func (h *Handler) copyfile(w http.ResponseWriter, data *packed.FileData) {
-	w.Write(h.mapped[data.Offset : data.Offset+data.Length])
+	offset += data.Offset
+	w.Write(h.mapped[offset : offset+length])
 }
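With the sendfile(2)/Hijacker path removed, the body write reduces to slicing the handler's mmap'd pack: the range offset is file-relative, so it is shifted by the file's own offset within the pack before indexing. An illustrative sketch, with a plain byte slice standing in for the mapping and a hypothetical helper in place of the inline code:

```go
package main

import "fmt"

// serveWindow mimics the two new lines in ServeHTTP: shift the
// file-relative range offset by the file's offset within the pack,
// then slice that window out of the mapping.
func serveWindow(mapped []byte, fileOffset, rangeOffset, length uint64) []byte {
	off := fileOffset + rangeOffset // offset += data.Offset
	return mapped[off : off+length] // w.Write(h.mapped[offset : offset+length])
}

func main() {
	pack := []byte("HEADERhello, worldFOOTER")
	// the packed file "hello, world" starts at pack offset 6;
	// serve its bytes [7,11], i.e. the range "bytes=7-11"
	fmt.Printf("%s\n", serveWindow(pack, 6, 7, 5))
}
```

Dropping the raw-socket path trades zero-copy transmission for code that stays inside Go's normal `ResponseWriter` machinery (and so keeps timeouts, HTTP/2, and TLS working).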

 func acceptedEncodings(req *http.Request) (gzip, brotli bool) {
@@ -280,3 +241,45 @@ func clientHasCachedVersion(etag string, startTime time.Time, req *http.Request,
 	}
 	return cachedTime.After(startTime)
 }
+
+// getFileRange returns the byte offset and length of the file to serve, along
+// with whether or not it's partial content.
+func getFileRange(data *packed.FileData, req *http.Request) (offset, length uint64, isPartial bool) {
+	length = data.Length
+
+	// only accept "Range: bytes=…"
+	r := req.Header.Get("Range")
+	if !strings.HasPrefix(r, "bytes=") {
+		return
+	}
+	r = strings.TrimPrefix(r, "bytes=")
+
+	// only accept a single range, "from-to", mapping to interval [from,to]
+	pos := strings.IndexByte(r, '-')
+	if pos == -1 {
+		return
+	}
+	sfrom, sto := r[:pos], r[pos+1:]
+	from, err := strconv.ParseUint(sfrom, 10, 64)
+	if err != nil {
+		return
+	}
+	to, err := strconv.ParseUint(sto, 10, 64)
+	if err != nil {
+		return
+	}
+
+	// validate the interval lies within the file
+	switch {
+	case from > to,
+		from >= data.Length,
+		to >= data.Length:
+		return
+	}
+
+	// all good
+	offset = from
+	length = to - from + 1
+	isPartial = true
+	return
+}
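The new parser accepts only an explicit `from-to` range and treats everything else (open-ended `from-`, suffix `-n`, multipart, or malformed headers) as a full-file request. The same logic re-cast as a standalone function over a header string and file size (illustrative; the real getFileRange takes `*packed.FileData` and `*http.Request`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseSingleRange mirrors getFileRange: accept only "bytes=from-to",
// inclusive on both ends, and reject anything outside [0, size).
func parseSingleRange(header string, size uint64) (offset, length uint64, isPartial bool) {
	length = size // default: serve the whole file
	if !strings.HasPrefix(header, "bytes=") {
		return
	}
	r := strings.TrimPrefix(header, "bytes=")
	pos := strings.IndexByte(r, '-')
	if pos == -1 {
		return
	}
	from, err := strconv.ParseUint(r[:pos], 10, 64)
	if err != nil {
		return
	}
	to, err := strconv.ParseUint(r[pos+1:], 10, 64)
	if err != nil {
		return
	}
	if from > to || to >= size {
		return
	}
	return from, to - from + 1, true
}

func main() {
	off, n, ok := parseSingleRange("bytes=0-499", 1000)
	fmt.Println(off, n, ok) // 0 500 true

	// open-ended ranges fall back to serving the whole file, as in the diff
	_, _, ok = parseSingleRange("bytes=500-", 1000)
	fmt.Println(ok) // false
}
```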
|
|
|
@ -15,11 +15,15 @@
|
||||||
*/
|
*/
|
||||||
package packed
|
package packed
|
||||||
|
|
||||||
import proto "github.com/gogo/protobuf/proto"
|
import (
|
||||||
import fmt "fmt"
|
fmt "fmt"
|
||||||
import math "math"
|
|
||||||
|
|
||||||
import io "io"
|
proto "github.com/gogo/protobuf/proto"
|
||||||
|
|
||||||
|
math "math"
|
||||||
|
|
||||||
|
io "io"
|
||||||
|
)
|
||||||
|
|
||||||
// Reference imports to suppress errors if they are not otherwise used.
|
// Reference imports to suppress errors if they are not otherwise used.
|
||||||
var _ = proto.Marshal
|
var _ = proto.Marshal
|
||||||
|
|