author    Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com>  2023-12-24 19:11:05 +0100
committer Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com>  2024-01-27 16:28:14 +0100
commit    7285e74090852b5d52f25e577850fa75f4aa8573 (patch)
tree      54d07cb4a7de2db5c89f2590266595f0aca6cbd6 /parser
parent    5fd1e7490305570872d3899f5edda950903c5213 (diff)
all: Rework page store, add a dynacache, improve partial rebuilds, and some general spring cleaning
There are some breaking changes in this commit, see #11455.

Closes #11455
Closes #11549

This fixes a set of bugs (see the issue list below) and also pays down some technical debt accumulated over the years. We now build with Staticcheck enabled in the CI build.

Performance should be about the same as before for regular-sized Hugo sites, but it should perform and scale much better on larger data sets, as objects that use lots of memory (e.g. rendered Markdown, big JSON files read into maps with transform.Unmarshal, etc.) will now get automatically garbage collected if needed. Performance on partial rebuilds when running the server in fast render mode should be the same, but the change detection should be much more accurate.

A list of the notable new features:

* A new dependency tracker that covers (almost) all of Hugo's API and is used to do fine-grained partial rebuilds when running the server.
* A new and simpler tree document store which allows fast lookups and prefix-walking in all dimensions (e.g. language) concurrently.
* You can now configure an upper memory limit, allowing for much larger data sets and/or running on lower-specced PCs.

We have also lifted the "no resources in sub folders" restriction for branch bundles (e.g. sections).

Memory Limit: Hugo will, by default, set aside a quarter of the total system memory, but you can set this via the OS environment variable HUGO_MEMORYLIMIT (in gigabytes). This is backed by a partitioned LRU cache used throughout Hugo, one that gets dynamically resized in low-memory situations, allowing Go's garbage collector to free the memory.

New Dependency Tracker: Hugo has had a rule-based, coarse-grained approach to server rebuilds that has worked mostly pretty well, but there have been some surprises (e.g. stale content). This is now revamped with a new dependency tracker that can quickly calculate the delta given a changed resource (e.g. a content file, template, JS file, etc.). This handles transitive relations, e.g. $page -> js.Build -> JS import, or $page1.Content -> render hook -> site.GetPage -> $page2.Title, or $page1.Content -> shortcode -> partial -> site.RegularPages -> $page2.Content -> shortcode ..., and it should also handle changes to aggregated values (e.g. site.Lastmod) effectively.

This covers all of Hugo's API with two known exceptions (a list that may not be fully exhaustive):

* Changes to files loaded with the template func os.ReadFile may not be handled correctly. We recommend loading resources with resources.Get.
* Changes to Hugo objects (e.g. Page) passed in the template context to lang.Translate may not be detected correctly. We recommend keeping i18n templates simple, without much data context passed in other than simple types such as strings and numbers.

Note that the cachebuster configuration (when A changes, then rebuild B) works well with the above, but we recommend that you revise that configuration, as in most situations it should no longer be needed. One example where it is still needed is with TailwindCSS, using changes to hugo_stats.json to trigger new CSS rebuilds.

Document Store: Previously, a little simplified, we split the document store (where we store pages and resources) into one tree per language. This worked pretty well, but the structure made some operations harder than they needed to be. We have now restructured it into one Radix tree for all languages. Internally, the language is considered a dimension of that tree, and the tree can be viewed in all dimensions concurrently. This makes some language-related operations simpler (e.g. finding translations is just a slice range), but the idea is that it should also be relatively inexpensive to add more dimensions if needed (e.g. role). A minimal sketch of this layout follows below.
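To make the dimension idea concrete, here is a minimal, illustrative Go sketch. It is not Hugo's implementation: the types and names are hypothetical, and a plain map stands in for the Radix tree. It only shows why translations become a slice read and why a prefix walk sees every language at once.

    // Hypothetical sketch of "one store, language as a dimension".
    // A map stands in for Hugo's Radix tree; names are illustrative.
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    type page struct {
        lang  string
        title string
    }

    // store maps a path to its per-language variants, kept sorted by language.
    type store map[string][]page

    func (s store) insert(path, lang, title string) {
        variants := append(s[path], page{lang: lang, title: title})
        sort.Slice(variants, func(i, j int) bool { return variants[i].lang < variants[j].lang })
        s[path] = variants
    }

    // translations is "just a slice range": all variants stored under one key.
    func (s store) translations(path string) []page { return s[path] }

    // walkPrefix visits every path below prefix, across all languages at once.
    func (s store) walkPrefix(prefix string, fn func(path string, variants []page)) {
        for path, variants := range s {
            if strings.HasPrefix(path, prefix) {
                fn(path, variants)
            }
        }
    }

    func main() {
        s := store{}
        s.insert("/blog/post", "en", "Post")
        s.insert("/blog/post", "nn", "Innlegg")
        fmt.Println(s.translations("/blog/post")) // [{en Post} {nn Innlegg}]
        s.walkPrefix("/blog", func(p string, v []page) { fmt.Println(p, len(v)) })
    }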
Fixes #10169 Fixes #10364 Fixes #10482 Fixes #10630 Fixes #10656 Fixes #10694 Fixes #10918 Fixes #11262 Fixes #11439 Fixes #11453 Fixes #11457 Fixes #11466 Fixes #11540 Fixes #11551 Fixes #11556 Fixes #11654 Fixes #11661 Fixes #11663 Fixes #11664 Fixes #11669 Fixes #11671 Fixes #11807 Fixes #11808 Fixes #11809 Fixes #11815 Fixes #11840 Fixes #11853 Fixes #11860 Fixes #11883 Fixes #11904 Fixes #7388 Fixes #7425 Fixes #7436 Fixes #7544 Fixes #7882 Fixes #7960 Fixes #8255 Fixes #8307 Fixes #8863 Fixes #8927 Fixes #9192 Fixes #9324
Diffstat (limited to 'parser')
-rw-r--r--  parser/lowercase_camel_json.go                   13
-rw-r--r--  parser/metadecoders/decoder.go                   24
-rw-r--r--  parser/pageparser/pagelexer.go                   19
-rw-r--r--  parser/pageparser/pagelexer_intro.go              6
-rw-r--r--  parser/pageparser/pageparser.go                  34
-rw-r--r--  parser/pageparser/pageparser_intro_test.go       54
-rw-r--r--  parser/pageparser/pageparser_shortcode_test.go  168
-rw-r--r--  parser/pageparser/pageparser_test.go              5
8 files changed, 178 insertions, 145 deletions
diff --git a/parser/lowercase_camel_json.go b/parser/lowercase_camel_json.go
index d48aa40c4..3dd4c24b0 100644
--- a/parser/lowercase_camel_json.go
+++ b/parser/lowercase_camel_json.go
@@ -25,8 +25,7 @@ import (
// Regexp definitions
var (
- keyMatchRegex = regexp.MustCompile(`\"(\w+)\":`)
- wordBarrierRegex = regexp.MustCompile(`(\w)([A-Z])`)
+ keyMatchRegex = regexp.MustCompile(`\"(\w+)\":`)
)
// Code adapted from https://gist.github.com/piersy/b9934790a8892db1a603820c0c23e4a7
@@ -92,19 +91,17 @@ func (c ReplacingJSONMarshaller) MarshalJSON() ([]byte, error) {
if !hreflect.IsTruthful(v) {
delete(m, k)
} else {
- switch v.(type) {
+ switch vv := v.(type) {
case map[string]interface{}:
- removeZeroVAlues(v.(map[string]any))
+ removeZeroVAlues(vv)
case []interface{}:
- for _, vv := range v.([]interface{}) {
- if m, ok := vv.(map[string]any); ok {
+ for _, vvv := range vv {
+ if m, ok := vvv.(map[string]any); ok {
removeZeroVAlues(m)
}
}
}
-
}
-
}
}
removeZeroVAlues(m)
diff --git a/parser/metadecoders/decoder.go b/parser/metadecoders/decoder.go
index 8d93d86a0..5dac23f03 100644
--- a/parser/metadecoders/decoder.go
+++ b/parser/metadecoders/decoder.go
@@ -174,22 +174,22 @@ func (d Decoder) UnmarshalTo(data []byte, f Format, v any) error {
// and change all maps to map[string]interface{} like we would've
// gotten from `json`.
var ptr any
- switch v.(type) {
+ switch vv := v.(type) {
case *map[string]any:
- ptr = *v.(*map[string]any)
+ ptr = *vv
case *any:
- ptr = *v.(*any)
+ ptr = *vv
default:
// Not a map.
}
if ptr != nil {
if mm, changed := stringifyMapKeys(ptr); changed {
- switch v.(type) {
+ switch vv := v.(type) {
case *map[string]any:
- *v.(*map[string]any) = mm.(map[string]any)
+ *vv = mm.(map[string]any)
case *any:
- *v.(*any) = mm
+ *vv = mm
}
}
}
@@ -218,9 +218,9 @@ func (d Decoder) unmarshalCSV(data []byte, v any) error {
return err
}
- switch v.(type) {
+ switch vv := v.(type) {
case *any:
- *v.(*any) = records
+ *vv = records
default:
return fmt.Errorf("CSV cannot be unmarshaled into %T", v)
@@ -257,11 +257,11 @@ func (d Decoder) unmarshalORG(data []byte, v any) error {
frontMatter[k] = v
}
}
- switch v.(type) {
+ switch vv := v.(type) {
case *map[string]any:
- *v.(*map[string]any) = frontMatter
- default:
- *v.(*any) = frontMatter
+ *vv = frontMatter
+ case *any:
+ *vv = frontMatter
}
return nil
}
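The recurring change in this file is worth a note: binding the switched value with switch vv := v.(type) gives each case a correctly typed vv, so the repeated v.(*map[string]any)-style assertions (the kind Staticcheck flags) can be dropped. A self-contained sketch of the idiom, with illustrative names:

    // Sketch of the type-switch binding idiom used in the hunks above.
    package main

    import "fmt"

    // assign writes m through v when v is a supported pointer target.
    func assign(v any, m map[string]any) {
        switch vv := v.(type) {
        case *map[string]any:
            *vv = m // vv is already typed; no second assertion needed
        case *any:
            *vv = m
        default:
            // Not a supported target; leave v untouched.
        }
    }

    func main() {
        var target map[string]any
        assign(&target, map[string]any{"title": "hello"})
        fmt.Println(target["title"]) // hello
    }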
diff --git a/parser/pageparser/pagelexer.go b/parser/pageparser/pagelexer.go
index 64cd4bfc1..bd903b771 100644
--- a/parser/pageparser/pagelexer.go
+++ b/parser/pageparser/pagelexer.go
@@ -50,6 +50,9 @@ type pageLexer struct {
// items delivered to client
items Items
+
+ // error delivered to the client
+ err error
}
// Implement the Result interface
@@ -164,7 +167,6 @@ func (l *pageLexer) emit(t ItemType) {
}
l.append(Item{Type: t, low: l.start, high: l.pos})
-
}
// sends a string item back to the client.
@@ -210,7 +212,6 @@ func (l *pageLexer) ignoreEscapesAndEmit(t ItemType, isString bool) {
}
l.start = l.pos
-
}
// gets the current value (for debugging and error handling)
@@ -227,7 +228,14 @@ var lf = []byte("\n")
// nil terminates the parser
func (l *pageLexer) errorf(format string, args ...any) stateFunc {
- l.append(Item{Type: tError, Err: fmt.Errorf(format, args...)})
+ l.append(Item{Type: tError, Err: fmt.Errorf(format, args...), low: l.start, high: l.pos})
+ return nil
+}
+
+// documentError can be used to signal a fatal error in the lexing process.
+// nil terminates the parser
+func (l *pageLexer) documentError(err error) stateFunc {
+ l.err = err
return nil
}
@@ -465,6 +473,7 @@ func lexDone(l *pageLexer) stateFunc {
return nil
}
+//lint:ignore U1000 useful for debugging
func (l *pageLexer) printCurrentInput() {
fmt.Printf("input[%d:]: %q", l.pos, string(l.input[l.pos:]))
}
@@ -475,10 +484,6 @@ func (l *pageLexer) index(sep []byte) int {
return bytes.Index(l.input[l.pos:], sep)
}
-func (l *pageLexer) indexByte(sep byte) int {
- return bytes.IndexByte(l.input[l.pos:], sep)
-}
-
func (l *pageLexer) hasPrefix(prefix []byte) bool {
return bytes.HasPrefix(l.input[l.pos:], prefix)
}
diff --git a/parser/pageparser/pagelexer_intro.go b/parser/pageparser/pagelexer_intro.go
index 6e4617998..25af4170b 100644
--- a/parser/pageparser/pagelexer_intro.go
+++ b/parser/pageparser/pagelexer_intro.go
@@ -13,6 +13,10 @@
package pageparser
+import "errors"
+
+var ErrPlainHTMLDocumentsNotSupported = errors.New("plain HTML documents not supported")
+
func lexIntroSection(l *pageLexer) stateFunc {
l.summaryDivider = summaryDivider
@@ -45,7 +49,7 @@ LOOP:
l.emit(TypeIgnore)
continue LOOP
} else {
- return l.errorf("plain HTML documents not supported")
+ return l.documentError(ErrPlainHTMLDocumentsNotSupported)
}
}
break LOOP
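Because a plain-HTML document now fails with the exported sentinel ErrPlainHTMLDocumentsNotSupported instead of an inline tError item, callers can branch on it with the standard errors package. A caller-side sketch of the assumed usage (not part of this diff; parse below is a local stand-in for the real entry point):

    // Hypothetical caller; parse stands in for pageparser's entry point.
    package main

    import (
        "errors"
        "fmt"
    )

    var ErrPlainHTMLDocumentsNotSupported = errors.New("plain HTML documents not supported")

    func parse(b []byte) error {
        if len(b) > 0 && b[0] == '<' { // crude stand-in for the real lexer check
            return ErrPlainHTMLDocumentsNotSupported
        }
        return nil
    }

    func main() {
        if err := parse([]byte("<html>")); errors.Is(err, ErrPlainHTMLDocumentsNotSupported) {
            fmt.Println("skipping plain HTML document")
        }
    }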
diff --git a/parser/pageparser/pageparser.go b/parser/pageparser/pageparser.go
index 8d4c757af..9e8b6d803 100644
--- a/parser/pageparser/pageparser.go
+++ b/parser/pageparser/pageparser.go
@@ -34,9 +34,22 @@ type Result interface {
var _ Result = (*pageLexer)(nil)
-// Parse parses the page in the given reader according to the given Config.
-func Parse(r io.Reader, cfg Config) (Result, error) {
- return parseSection(r, cfg, lexIntroSection)
+// ParseBytes parses the page in b according to the given Config.
+func ParseBytes(b []byte, cfg Config) (Items, error) {
+ l, err := parseBytes(b, cfg, lexIntroSection)
+ if err != nil {
+ return nil, err
+ }
+ return l.items, l.err
+}
+
+// ParseBytesMain parses b starting with the main section.
+func ParseBytesMain(b []byte, cfg Config) (Items, error) {
+ l, err := parseBytes(b, cfg, lexMainSection)
+ if err != nil {
+ return nil, err
+ }
+ return l.items, l.err
}
type ContentFrontMatter struct {
@@ -50,24 +63,29 @@ type ContentFrontMatter struct {
func ParseFrontMatterAndContent(r io.Reader) (ContentFrontMatter, error) {
var cf ContentFrontMatter
- psr, err := Parse(r, Config{})
+ input, err := io.ReadAll(r)
+ if err != nil {
+ return cf, fmt.Errorf("failed to read page content: %w", err)
+ }
+
+ psr, err := ParseBytes(input, Config{})
if err != nil {
return cf, err
}
var frontMatterSource []byte
- iter := psr.Iterator()
+ iter := NewIterator(psr)
walkFn := func(item Item) bool {
if frontMatterSource != nil {
// The rest is content.
- cf.Content = psr.Input()[item.low:]
+ cf.Content = input[item.low:]
// Done
return false
} else if item.IsFrontMatter() {
cf.FrontMatterFormat = FormatFromFrontMatterType(item.Type)
- frontMatterSource = item.Val(psr.Input())
+ frontMatterSource = item.Val(input)
}
return true
}
@@ -106,7 +124,7 @@ func parseSection(r io.Reader, cfg Config, start stateFunc) (Result, error) {
return parseBytes(b, cfg, start)
}
-func parseBytes(b []byte, cfg Config, start stateFunc) (Result, error) {
+func parseBytes(b []byte, cfg Config, start stateFunc) (*pageLexer, error) {
lexer := newPageLexer(b, start, cfg)
lexer.run()
return lexer, nil
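Pieced together from the signatures in this file (ParseBytes, NewIterator, Item.IsFrontMatter, Item.Val), here is a usage sketch of the new byte-oriented entry point. Treat it as assumed usage rather than documented API; in particular, ending the loop via IsDone is an assumption.

    // Assumed usage of the new API; not taken from this diff.
    package main

    import (
        "fmt"

        "github.com/gohugoio/hugo/parser/pageparser"
    )

    func main() {
        input := []byte("---\ntitle: hello\n---\n\nSome text.\n")

        items, err := pageparser.ParseBytes(input, pageparser.Config{})
        if err != nil {
            panic(err) // e.g. ErrPlainHTMLDocumentsNotSupported
        }

        iter := pageparser.NewIterator(items)
        for {
            item := iter.Next()
            if item.IsFrontMatter() {
                // Items store offsets into the input; Val resolves the bytes.
                fmt.Printf("front matter: %q\n", item.Val(input))
            }
            if item.IsDone() { // assumed: true on EOF or error
                break
            }
        }
    }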
diff --git a/parser/pageparser/pageparser_intro_test.go b/parser/pageparser/pageparser_intro_test.go
index 1b2d59ccc..df2f2579b 100644
--- a/parser/pageparser/pageparser_intro_test.go
+++ b/parser/pageparser/pageparser_intro_test.go
@@ -25,6 +25,7 @@ type lexerTest struct {
name string
input string
items []typeText
+ err error
}
type typeText struct {
@@ -58,34 +59,40 @@ var crLfReplacer = strings.NewReplacer("\r", "#", "\n", "$")
// TODO(bep) a way to toggle ORG mode vs the rest.
var frontMatterTests = []lexerTest{
- {"empty", "", []typeText{tstEOF}},
- {"Byte order mark", "\ufeff\nSome text.\n", []typeText{nti(TypeIgnore, "\ufeff"), tstSomeText, tstEOF}},
- {"HTML Document", ` <html> `, []typeText{nti(tError, "plain HTML documents not supported")}},
- {"HTML Document with shortcode", `<html>{{< sc1 >}}</html>`, []typeText{nti(tError, "plain HTML documents not supported")}},
- {"No front matter", "\nSome text.\n", []typeText{tstSomeText, tstEOF}},
- {"YAML front matter", "---\nfoo: \"bar\"\n---\n\nSome text.\n", []typeText{tstFrontMatterYAML, tstSomeText, tstEOF}},
- {"YAML empty front matter", "---\n---\n\nSome text.\n", []typeText{nti(TypeFrontMatterYAML, ""), tstSomeText, tstEOF}},
- {"YAML commented out front matter", "<!--\n---\nfoo: \"bar\"\n---\n-->\nSome text.\n", []typeText{nti(TypeIgnore, "<!--\n"), tstFrontMatterYAML, nti(TypeIgnore, "-->"), tstSomeText, tstEOF}},
- {"YAML commented out front matter, no end", "<!--\n---\nfoo: \"bar\"\n---\nSome text.\n", []typeText{nti(TypeIgnore, "<!--\n"), tstFrontMatterYAML, nti(tError, "starting HTML comment with no end")}},
+ {"empty", "", []typeText{tstEOF}, nil},
+ {"Byte order mark", "\ufeff\nSome text.\n", []typeText{nti(TypeIgnore, "\ufeff"), tstSomeText, tstEOF}, nil},
+ {"HTML Document", ` <html> `, nil, ErrPlainHTMLDocumentsNotSupported},
+ {"HTML Document with shortcode", `<html>{{< sc1 >}}</html>`, nil, ErrPlainHTMLDocumentsNotSupported},
+ {"No front matter", "\nSome text.\n", []typeText{tstSomeText, tstEOF}, nil},
+ {"YAML front matter", "---\nfoo: \"bar\"\n---\n\nSome text.\n", []typeText{tstFrontMatterYAML, tstSomeText, tstEOF}, nil},
+ {"YAML empty front matter", "---\n---\n\nSome text.\n", []typeText{nti(TypeFrontMatterYAML, ""), tstSomeText, tstEOF}, nil},
+ {"YAML commented out front matter", "<!--\n---\nfoo: \"bar\"\n---\n-->\nSome text.\n", []typeText{nti(TypeIgnore, "<!--\n"), tstFrontMatterYAML, nti(TypeIgnore, "-->"), tstSomeText, tstEOF}, nil},
+ {"YAML commented out front matter, no end", "<!--\n---\nfoo: \"bar\"\n---\nSome text.\n", []typeText{nti(TypeIgnore, "<!--\n"), tstFrontMatterYAML, nti(tError, "starting HTML comment with no end")}, nil},
// Note that we keep all bytes as they are, but we need to handle CRLF
- {"YAML front matter CRLF", "---\r\nfoo: \"bar\"\r\n---\n\nSome text.\n", []typeText{tstFrontMatterYAMLCRLF, tstSomeText, tstEOF}},
- {"TOML front matter", "+++\nfoo = \"bar\"\n+++\n\nSome text.\n", []typeText{tstFrontMatterTOML, tstSomeText, tstEOF}},
- {"JSON front matter", tstJSON + "\r\n\nSome text.\n", []typeText{tstFrontMatterJSON, tstSomeText, tstEOF}},
- {"ORG front matter", tstORG + "\nSome text.\n", []typeText{tstFrontMatterORG, tstSomeText, tstEOF}},
- {"Summary divider ORG", tstORG + "\nSome text.\n# more\nSome text.\n", []typeText{tstFrontMatterORG, tstSomeText, nti(TypeLeadSummaryDivider, "# more\n"), nti(tText, "Some text.\n"), tstEOF}},
- {"Summary divider", "+++\nfoo = \"bar\"\n+++\n\nSome text.\n<!--more-->\nSome text.\n", []typeText{tstFrontMatterTOML, tstSomeText, tstSummaryDivider, nti(tText, "Some text.\n"), tstEOF}},
- {"Summary divider same line", "+++\nfoo = \"bar\"\n+++\n\nSome text.<!--more-->Some text.\n", []typeText{tstFrontMatterTOML, nti(tText, "\nSome text."), nti(TypeLeadSummaryDivider, "<!--more-->"), nti(tText, "Some text.\n"), tstEOF}},
+ {"YAML front matter CRLF", "---\r\nfoo: \"bar\"\r\n---\n\nSome text.\n", []typeText{tstFrontMatterYAMLCRLF, tstSomeText, tstEOF}, nil},
+ {"TOML front matter", "+++\nfoo = \"bar\"\n+++\n\nSome text.\n", []typeText{tstFrontMatterTOML, tstSomeText, tstEOF}, nil},
+ {"JSON front matter", tstJSON + "\r\n\nSome text.\n", []typeText{tstFrontMatterJSON, tstSomeText, tstEOF}, nil},
+ {"ORG front matter", tstORG + "\nSome text.\n", []typeText{tstFrontMatterORG, tstSomeText, tstEOF}, nil},
+ {"Summary divider ORG", tstORG + "\nSome text.\n# more\nSome text.\n", []typeText{tstFrontMatterORG, tstSomeText, nti(TypeLeadSummaryDivider, "# more\n"), nti(tText, "Some text.\n"), tstEOF}, nil},
+ {"Summary divider", "+++\nfoo = \"bar\"\n+++\n\nSome text.\n<!--more-->\nSome text.\n", []typeText{tstFrontMatterTOML, tstSomeText, tstSummaryDivider, nti(tText, "Some text.\n"), tstEOF}, nil},
+ {"Summary divider same line", "+++\nfoo = \"bar\"\n+++\n\nSome text.<!--more-->Some text.\n", []typeText{tstFrontMatterTOML, nti(tText, "\nSome text."), nti(TypeLeadSummaryDivider, "<!--more-->"), nti(tText, "Some text.\n"), tstEOF}, nil},
// https://github.com/gohugoio/hugo/issues/5402
- {"Summary and shortcode, no space", "+++\nfoo = \"bar\"\n+++\n\nSome text.\n<!--more-->{{< sc1 >}}\nSome text.\n", []typeText{tstFrontMatterTOML, tstSomeText, nti(TypeLeadSummaryDivider, "<!--more-->"), tstLeftNoMD, tstSC1, tstRightNoMD, tstSomeText, tstEOF}},
+ {"Summary and shortcode, no space", "+++\nfoo = \"bar\"\n+++\n\nSome text.\n<!--more-->{{< sc1 >}}\nSome text.\n", []typeText{tstFrontMatterTOML, tstSomeText, nti(TypeLeadSummaryDivider, "<!--more-->"), tstLeftNoMD, tstSC1, tstRightNoMD, tstSomeText, tstEOF}, nil},
// https://github.com/gohugoio/hugo/issues/5464
- {"Summary and shortcode only", "+++\nfoo = \"bar\"\n+++\n{{< sc1 >}}\n<!--more-->\n{{< sc2 >}}", []typeText{tstFrontMatterTOML, tstLeftNoMD, tstSC1, tstRightNoMD, tstNewline, tstSummaryDivider, tstLeftNoMD, tstSC2, tstRightNoMD, tstEOF}},
+ {"Summary and shortcode only", "+++\nfoo = \"bar\"\n+++\n{{< sc1 >}}\n<!--more-->\n{{< sc2 >}}", []typeText{tstFrontMatterTOML, tstLeftNoMD, tstSC1, tstRightNoMD, tstNewline, tstSummaryDivider, tstLeftNoMD, tstSC2, tstRightNoMD, tstEOF}, nil},
}
func TestFrontMatter(t *testing.T) {
t.Parallel()
c := qt.New(t)
for i, test := range frontMatterTests {
- items := collect([]byte(test.input), false, lexIntroSection)
+ items, err := collect([]byte(test.input), false, lexIntroSection)
+ if err != nil {
+ c.Assert(err, qt.Equals, test.err)
+ continue
+ } else {
+ c.Assert(test.err, qt.IsNil)
+ }
if !equal(test.input, items, test.items) {
got := itemsToString(items, []byte(test.input))
expected := testItemsToString(test.items)
@@ -124,12 +131,15 @@ func testItemsToString(items []typeText) string {
return crLfReplacer.Replace(sb.String())
}
-func collectWithConfig(input []byte, skipFrontMatter bool, stateStart stateFunc, cfg Config) (items []Item) {
+func collectWithConfig(input []byte, skipFrontMatter bool, stateStart stateFunc, cfg Config) (items []Item, err error) {
l := newPageLexer(input, stateStart, cfg)
l.run()
iter := NewIterator(l.items)
for {
+ if l.err != nil {
+ return nil, l.err
+ }
item := iter.Next()
items = append(items, item)
if item.Type == tEOF || item.Type == tError {
@@ -139,13 +149,13 @@ func collectWithConfig(input []byte, skipFrontMatter bool, stateStart stateFunc,
return
}
-func collect(input []byte, skipFrontMatter bool, stateStart stateFunc) (items []Item) {
+func collect(input []byte, skipFrontMatter bool, stateStart stateFunc) (items []Item, err error) {
var cfg Config
return collectWithConfig(input, skipFrontMatter, stateStart, cfg)
}
-func collectStringMain(input string) []Item {
+func collectStringMain(input string) ([]Item, error) {
return collect([]byte(input), true, lexMainSection)
}
diff --git a/parser/pageparser/pageparser_shortcode_test.go b/parser/pageparser/pageparser_shortcode_test.go
index 26d836e32..327da30ee 100644
--- a/parser/pageparser/pageparser_shortcode_test.go
+++ b/parser/pageparser/pageparser_shortcode_test.go
@@ -20,46 +20,42 @@ import (
)
var (
- tstEOF = nti(tEOF, "")
- tstLeftNoMD = nti(tLeftDelimScNoMarkup, "{{<")
- tstRightNoMD = nti(tRightDelimScNoMarkup, ">}}")
- tstLeftMD = nti(tLeftDelimScWithMarkup, "{{%")
- tstRightMD = nti(tRightDelimScWithMarkup, "%}}")
- tstSCClose = nti(tScClose, "/")
- tstSC1 = nti(tScName, "sc1")
- tstSC1Inline = nti(tScNameInline, "sc1.inline")
- tstSC2Inline = nti(tScNameInline, "sc2.inline")
- tstSC2 = nti(tScName, "sc2")
- tstSC3 = nti(tScName, "sc3")
- tstSCSlash = nti(tScName, "sc/sub")
- tstParam1 = nti(tScParam, "param1")
- tstParam2 = nti(tScParam, "param2")
- tstParamBoolTrue = nti(tScParam, "true")
- tstParamBoolFalse = nti(tScParam, "false")
- tstParamInt = nti(tScParam, "32")
- tstParamFloat = nti(tScParam, "3.14")
- tstVal = nti(tScParamVal, "Hello World")
- tstText = nti(tText, "Hello World")
+ tstEOF = nti(tEOF, "")
+ tstLeftNoMD = nti(tLeftDelimScNoMarkup, "{{<")
+ tstRightNoMD = nti(tRightDelimScNoMarkup, ">}}")
+ tstLeftMD = nti(tLeftDelimScWithMarkup, "{{%")
+ tstRightMD = nti(tRightDelimScWithMarkup, "%}}")
+ tstSCClose = nti(tScClose, "/")
+ tstSC1 = nti(tScName, "sc1")
+ tstSC1Inline = nti(tScNameInline, "sc1.inline")
+ tstSC2Inline = nti(tScNameInline, "sc2.inline")
+ tstSC2 = nti(tScName, "sc2")
+ tstSC3 = nti(tScName, "sc3")
+ tstSCSlash = nti(tScName, "sc/sub")
+ tstParam1 = nti(tScParam, "param1")
+ tstParam2 = nti(tScParam, "param2")
+ tstVal = nti(tScParamVal, "Hello World")
+ tstText = nti(tText, "Hello World")
)
var shortCodeLexerTests = []lexerTest{
- {"empty", "", []typeText{tstEOF}},
- {"spaces", " \t\n", []typeText{nti(tText, " \t\n"), tstEOF}},
- {"text", `to be or not`, []typeText{nti(tText, "to be or not"), tstEOF}},
- {"no markup", `{{< sc1 >}}`, []typeText{tstLeftNoMD, tstSC1, tstRightNoMD, tstEOF}},
- {"with EOL", "{{< sc1 \n >}}", []typeText{tstLeftNoMD, tstSC1, tstRightNoMD, tstEOF}},
+ {"empty", "", []typeText{tstEOF}, nil},
+ {"spaces", " \t\n", []typeText{nti(tText, " \t\n"), tstEOF}, nil},
+ {"text", `to be or not`, []typeText{nti(tText, "to be or not"), tstEOF}, nil},
+ {"no markup", `{{< sc1 >}}`, []typeText{tstLeftNoMD, tstSC1, tstRightNoMD, tstEOF}, nil},
+ {"with EOL", "{{< sc1 \n >}}", []typeText{tstLeftNoMD, tstSC1, tstRightNoMD, tstEOF}, nil},
- {"forward slash inside name", `{{< sc/sub >}}`, []typeText{tstLeftNoMD, tstSCSlash, tstRightNoMD, tstEOF}},
+ {"forward slash inside name", `{{< sc/sub >}}`, []typeText{tstLeftNoMD, tstSCSlash, tstRightNoMD, tstEOF}, nil},
- {"simple with markup", `{{% sc1 %}}`, []typeText{tstLeftMD, tstSC1, tstRightMD, tstEOF}},
- {"with spaces", `{{< sc1 >}}`, []typeText{tstLeftNoMD, tstSC1, tstRightNoMD, tstEOF}},
- {"indented on new line", "Hello\n {{% sc1 %}}", []typeText{nti(tText, "Hello\n"), nti(tIndentation, " "), tstLeftMD, tstSC1, tstRightMD, tstEOF}},
- {"indented on new line tab", "Hello\n\t{{% sc1 %}}", []typeText{nti(tText, "Hello\n"), nti(tIndentation, "\t"), tstLeftMD, tstSC1, tstRightMD, tstEOF}},
- {"indented on first line", " {{% sc1 %}}", []typeText{nti(tIndentation, " "), tstLeftMD, tstSC1, tstRightMD, tstEOF}},
+ {"simple with markup", `{{% sc1 %}}`, []typeText{tstLeftMD, tstSC1, tstRightMD, tstEOF}, nil},
+ {"with spaces", `{{< sc1 >}}`, []typeText{tstLeftNoMD, tstSC1, tstRightNoMD, tstEOF}, nil},
+ {"indented on new line", "Hello\n {{% sc1 %}}", []typeText{nti(tText, "Hello\n"), nti(tIndentation, " "), tstLeftMD, tstSC1, tstRightMD, tstEOF}, nil},
+ {"indented on new line tab", "Hello\n\t{{% sc1 %}}", []typeText{nti(tText, "Hello\n"), nti(tIndentation, "\t"), tstLeftMD, tstSC1, tstRightMD, tstEOF}, nil},
+ {"indented on first line", " {{% sc1 %}}", []typeText{nti(tIndentation, " "), tstLeftMD, tstSC1, tstRightMD, tstEOF}, nil},
{"mismatched rightDelim", `{{< sc1 %}}`, []typeText{
tstLeftNoMD, tstSC1,
nti(tError, "unrecognized character in shortcode action: U+0025 '%'. Note: Parameters with non-alphanumeric args must be quoted"),
- }},
+ }, nil},
{"inner, markup", `{{% sc1 %}} inner {{% /sc1 %}}`, []typeText{
tstLeftMD,
tstSC1,
@@ -70,79 +66,79 @@ var shortCodeLexerTests = []lexerTest{
tstSC1,
tstRightMD,
tstEOF,
- }},
+ }, nil},
{"close, but no open", `{{< /sc1 >}}`, []typeText{
tstLeftNoMD, nti(tError, "got closing shortcode, but none is open"),
- }},
+ }, nil},
{"close wrong", `{{< sc1 >}}{{< /another >}}`, []typeText{
tstLeftNoMD, tstSC1, tstRightNoMD, tstLeftNoMD, tstSCClose,
nti(tError, "closing tag for shortcode 'another' does not match start tag"),
- }},
+ }, nil},
{"close, but no open, more", `{{< sc1 >}}{{< /sc1 >}}{{< /another >}}`, []typeText{
tstLeftNoMD, tstSC1, tstRightNoMD, tstLeftNoMD, tstSCClose, tstSC1, tstRightNoMD, tstLeftNoMD, tstSCClose,
nti(tError, "closing tag for shortcode 'another' does not match start tag"),
- }},
+ }, nil},
{"close with extra keyword", `{{< sc1 >}}{{< /sc1 keyword>}}`, []typeText{
tstLeftNoMD, tstSC1, tstRightNoMD, tstLeftNoMD, tstSCClose, tstSC1,
nti(tError, "unclosed shortcode"),
- }},
+ }, nil},
{"float param, positional", `{{< sc1 3.14 >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "3.14"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"float param, named", `{{< sc1 param1=3.14 >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, nti(tScParamVal, "3.14"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"named param, raw string", `{{< sc1 param1=` + "`" + "Hello World" + "`" + " >}}", []typeText{
tstLeftNoMD, tstSC1, tstParam1, nti(tScParamVal, "Hello World"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"float param, named, space before", `{{< sc1 param1= 3.14 >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, nti(tScParamVal, "3.14"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"Youtube id", `{{< sc1 -ziL-Q_456igdO-4 >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "-ziL-Q_456igdO-4"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"non-alphanumerics param quoted", `{{< sc1 "-ziL-.%QigdO-4" >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "-ziL-.%QigdO-4"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"raw string", `{{< sc1` + "`" + "Hello World" + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "Hello World"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"raw string with newline", `{{< sc1` + "`" + `Hello
World` + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, `Hello
World`), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"raw string with escape character", `{{< sc1` + "`" + `Hello \b World` + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, `Hello \b World`), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"two params", `{{< sc1 param1 param2 >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstParam2, tstRightNoMD, tstEOF,
- }},
+ }, nil},
// issue #934
{"self-closing", `{{< sc1 />}}`, []typeText{
tstLeftNoMD, tstSC1, tstSCClose, tstRightNoMD, tstEOF,
- }},
+ }, nil},
// Issue 2498
{"multiple self-closing", `{{< sc1 />}}{{< sc1 />}}`, []typeText{
tstLeftNoMD, tstSC1, tstSCClose, tstRightNoMD,
tstLeftNoMD, tstSC1, tstSCClose, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"self-closing with param", `{{< sc1 param1 />}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstSCClose, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"multiple self-closing with param", `{{< sc1 param1 />}}{{< sc1 param1 />}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstSCClose, tstRightNoMD,
tstLeftNoMD, tstSC1, tstParam1, tstSCClose, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"multiple different self-closing with param", `{{< sc1 param1 />}}{{< sc2 param1 />}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstSCClose, tstRightNoMD,
tstLeftNoMD, tstSC2, tstParam1, tstSCClose, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"nested simple", `{{< sc1 >}}{{< sc2 >}}{{< /sc1 >}}`, []typeText{
tstLeftNoMD, tstSC1, tstRightNoMD,
tstLeftNoMD, tstSC2, tstRightNoMD,
tstLeftNoMD, tstSCClose, tstSC1, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"nested complex", `{{< sc1 >}}ab{{% sc2 param1 %}}cd{{< sc3 >}}ef{{< /sc3 >}}gh{{% /sc2 %}}ij{{< /sc1 >}}kl`, []typeText{
tstLeftNoMD, tstSC1, tstRightNoMD,
nti(tText, "ab"),
@@ -156,30 +152,31 @@ var shortCodeLexerTests = []lexerTest{
nti(tText, "ij"),
tstLeftNoMD, tstSCClose, tstSC1, tstRightNoMD,
nti(tText, "kl"), tstEOF,
- }},
+ }, nil},
{"two quoted params", `{{< sc1 "param nr. 1" "param nr. 2" >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "param nr. 1"), nti(tScParam, "param nr. 2"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"two named params", `{{< sc1 param1="Hello World" param2="p2Val">}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstVal, tstParam2, nti(tScParamVal, "p2Val"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"escaped quotes", `{{< sc1 param1=\"Hello World\" >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstVal, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"escaped quotes, positional param", `{{< sc1 \"param1\" >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"escaped quotes inside escaped quotes", `{{< sc1 param1=\"Hello \"escaped\" World\" >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1,
nti(tScParamVal, `Hello `), nti(tError, `got positional parameter 'escaped'. Cannot mix named and positional parameters`),
- }},
+ }, nil},
{
"escaped quotes inside nonescaped quotes",
`{{< sc1 param1="Hello \"escaped\" World" >}}`,
[]typeText{
tstLeftNoMD, tstSC1, tstParam1, nti(tScParamVal, `Hello "escaped" World`), tstRightNoMD, tstEOF,
},
+ nil,
},
{
"escaped quotes inside nonescaped quotes in positional param",
@@ -187,68 +184,69 @@ var shortCodeLexerTests = []lexerTest{
[]typeText{
tstLeftNoMD, tstSC1, nti(tScParam, `Hello "escaped" World`), tstRightNoMD, tstEOF,
},
+ nil,
},
{"escaped raw string, named param", `{{< sc1 param1=` + `\` + "`" + "Hello World" + `\` + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, nti(tError, "unrecognized escape character"),
- }},
+ }, nil},
{"escaped raw string, positional param", `{{< sc1 param1 ` + `\` + "`" + "Hello World" + `\` + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, nti(tError, "unrecognized escape character"),
- }},
+ }, nil},
{"two raw string params", `{{< sc1` + "`" + "Hello World" + "`" + "`" + "Second Param" + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "Hello World"), nti(tScParam, "Second Param"), tstRightNoMD, tstEOF,
- }},
+ }, nil},
{"unterminated quote", `{{< sc1 param2="Hello World>}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam2, nti(tError, "unterminated quoted string in shortcode parameter-argument: 'Hello World>}}'"),
- }},
+ }, nil},
{"unterminated raw string", `{{< sc1` + "`" + "Hello World" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tError, "unterminated raw string in shortcode parameter-argument: 'Hello World >}}'"),
- }},
+ }, nil},
{"unterminated raw string in second argument", `{{< sc1` + "`" + "Hello World" + "`" + "`" + "Second Param" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, nti(tScParam, "Hello World"), nti(tError, "unterminated raw string in shortcode parameter-argument: 'Second Param >}}'"),
- }},
+ }, nil},
{"one named param, one not", `{{< sc1 param1="Hello World" p2 >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstVal,
nti(tError, "got positional parameter 'p2'. Cannot mix named and positional parameters"),
- }},
+ }, nil},
{"one named param, one quoted positional param, both raw strings", `{{< sc1 param1=` + "`" + "Hello World" + "`" + "`" + "Second Param" + "`" + ` >}}`, []typeText{
tstLeftNoMD, tstSC1, tstParam1, tstVal,
nti(tError, "got quoted positional parameter. Cannot mix named and positional parameters"),
- }},
+ }, nil},
{"one named param, one quoted positional param", `{{< sc1 param1="Hello World" "And Universe" >}}`, []typeText{