
Commit 4f8660ce authored by eckhart

- Arithmetic examples refactored and commented

parent 7170a26a
-DHParser Version 0.8.6 (date ?)
+DHParser Version 0.8.6 (3.3.2019)
.................................
- default configuration now centralized in DHParser/configuration.py
......
@@ -20,6 +20,9 @@ built with DHParser. See
https://gitlab.lrz.de/badw-it/MLW-DSL/tree/master/VSCode for one such
example.
Furthermore, documentation and documented case studies of projects
realized with DHParser would be very useful.
In case you are interested in getting deeper into DHParser, there are
some bigger projects below:
@@ -27,109 +30,24 @@ some bigger projects, below:

Ideas for further development
=============================
Testing for specific error messages
-----------------------------------
Allow testing of error reporting by extending testing.grammar_unit in
such a way that it becomes possible to test for specific error codes.
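A sketch of what such a check could look like (the helper name `check_error_codes` and the shape of the error objects are made up for illustration; `testing.grammar_unit` would have to do something like this internally):

```python
from collections import Counter

def check_error_codes(errors, expected):
    """Compare the error codes produced by a test run against the
    expected codes. Returns an empty string if they match, otherwise
    a human-readable report (mirroring how grammar_unit reports test
    failures as strings)."""
    got = Counter(e.code for e in errors)
    want = Counter(expected)
    if got == want:
        return ''
    return 'expected error codes {} but got {}'.format(
        sorted(want.elements()), sorted(got.elements()))
```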
Better error reporting I
------------------------
A problem with error reporting consists in the fact that, at best, only
the very first parsing error is reported accurately; it then triggers a
number of mere follow-up errors. Stopping after the first error would
mean that, in order for the user to detect all true errors in his or
her file, the parser would have to be run just as many times.
A possible solution could be to define reentry points that can be caught
by a regular expression and where the parsing process restarts in a
defined way.
A reentry point could be defined as a pair (regular expression, parser)
or a triple (regular expression, parent parser, parser), where "parent
parser" would be the parser in the call stack to which the parsing
process retreats, before restarting.
A challenge could be to manage a clean retreat (captured variables,
left recursion stack, etc.) without making the parser guard (see
`parse.add_parser_guard`) more complex than it already is.
Also, a good variety of test cases would be desirable.
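The basic resumption mechanism could be sketched as follows (all names here, `resume_position` and `parse_with_recovery` included, are hypothetical illustrations, not DHParser's actual API):

```python
import re

def resume_position(text, error_pos, reentry_regex):
    """Return the position right after the next match of the reentry
    regular expression, or the end of the text if there is no match."""
    match = re.compile(reentry_regex).search(text, error_pos)
    return match.end() if match else len(text)

def parse_with_recovery(text, parse_statement, reentry_regex=r';\s*'):
    """Report all errors in one run: after a failure, skip to the next
    reentry point instead of aborting. `parse_statement` is any callable
    (text, pos) -> (node_or_None, new_pos)."""
    pos, results, error_positions = 0, [], []
    while pos < len(text):
        node, new_pos = parse_statement(text, pos)
        if node is None:                      # parsing error at `pos`
            error_positions.append(pos)
            new_pos = resume_position(text, pos, reentry_regex)
        else:
            results.append(node)
        pos = max(new_pos, pos + 1)           # always advance: guarantees termination
    return results, error_positions
```

The triple-variant (regular expression, parent parser, parser) would additionally unwind the call stack to the parent parser before resuming, which this flat sketch does not model.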
Optimization and Enhancement: Two-way-Traversal for AST-Transformation
----------------------------------------------------------------------
AST-transformations are done via a depth-first tree-traversal, that is,
the traversal function first goes all the way up the tree to the leaf
nodes and calls the transformation routines successively on the way
down. The routines are picked from the transformation-table, which is a
dictionary mapping Node tag names to sequences of transformation
functions.
The rationale for depth-first is that it is easier to transform a node,
if all of its children have already been transformed, i.e. simplified.
However, there are quite a few cases where depth-last would be better.
For example if you know you are going to discard a whole branch starting
from a certain node, it is a waste to transform all the child nodes
first.
As the tree is traversed anyway, there is no good reason why certain
transformation routines should not already be called on the way up. Of
course, as most routines more or less assume depth-first, we would need
two transformation tables: one for the routines that are called on the
way up, and one for the routines that are called on the way down.
This should be fairly easy to implement.
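A minimal sketch of such a two-table traversal (with a toy `Node` class standing in for DHParser's):

```python
class Node:
    """Minimal stand-in for a syntax-tree node (illustration only)."""
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

def traverse_two_way(node, pre_table, post_table):
    """Depth-first traversal with two transformation tables: routines
    from `pre_table` run on the way up to the leaves (before a node's
    children are visited), e.g. to discard a whole branch early;
    routines from `post_table` run on the way down (after the children
    have been transformed), as in the current depth-first scheme."""
    for func in pre_table.get(node.tag, []):
        func(node)
    for child in node.children:
        traverse_two_way(child, pre_table, post_table)
    for func in post_table.get(node.tag, []):
        func(node)
```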
Optimization: Early discarding of nodes
---------------------------------------
Reason: `traverse_recursive` and `Node.result-setter` are top time
consumers!
Allow specifying parsers/nodes whose results will be dropped right
away, so that the nodes they produce do not need to be removed during
the AST-transformations. Typical candidates would be:
1. Tokens ":_Token"
2. Whitespace ":Whitespace" (in some cases)
3. empty Nodes
and basically anything that would be removed globally (the "+" entry in
the AST-transformation dictionary) later anyway. A directive
("@discardable = ...") could be introduced to specify the discardables.
Challenges:
1. Discardable Nodes should not even be created in the first place to
avoid costly object creation and assignment of result to the Node
object on creation.
2. ...but discarded or discardable nodes are not the same as a not
matching parser. A possible solution would be to introduce a
dummy/zombie-Node that will be discarded by the calling parser,
e.g. ZeroOrMore, Series, etc.
3. Two kinds of conditions for discarding...?
4. Capture/Retrieve/Pop - need the parsed data even if the node would
otherwise be discardable (Example: Variable Delimiters.) So, either:
a. temporarily suspend discarding by Grammar-object-flag set and
cleared by Capture/Retrieve/Pop. Means yet another flag has to be
checked every time the decision to discard or not needs to be
taken...
b. statically check (i.e. check at compile time) that
Capture/Retrieve/Pop neither directly nor indirectly call a
discardable parser. Downside: Some parsers cannot profit from the
optimization. For example, variable delimiters, otherwise as all
delimiters a good candidate for discarding, cannot be discarded any
more.

Validation of Abstract Syntax Trees
-----------------------------------

Presently, defining the transformations from the concrete to the
abstract syntax tree is at best a test-driven trial-and-error process.
A first step toward a more systematic process might be to support
structural validation of abstract syntax trees. One could either use
the Abstract Syntax Description Language described in
https://www.cs.princeton.edu/research/techreps/TR-554-97
(my preferred choice) or, potentially, any of the XML-validation
techniques, like RelaxNG.

The next step and, evidently, the hard part would be to not only
validate various specimens of abstract syntax trees, but to verify
automatically that, given a certain grammar and a table of
transformations, the abstract syntax tree that any well-formed source
code yields is valid according to the structural definition.
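As a toy stand-in for ASDL- or RelaxNG-style validation, structural validity can be checked against a table that maps each tag name to a regular expression over the sequence of its children's tag names (illustration only, not a proposed DHParser API):

```python
import re

class Node:
    """Minimal stand-in for an AST node (illustration only)."""
    def __init__(self, tag, children=()):
        self.tag = tag
        self.children = list(children)

def validate(node, schema):
    """Structurally validate a tree: for every node, the sequence of
    its children's tag names (joined with blanks) must fully match the
    regular expression that `schema` assigns to the node's tag name.
    Tags missing from the schema are required to be leaves."""
    pattern = schema.get(node.tag, r'')      # default: no children allowed
    child_tags = ' '.join(child.tag for child in node.children)
    if not re.fullmatch(pattern, child_tags):
        return False
    return all(validate(child, schema) for child in node.children)
```

The hard verification step mentioned above would amount to proving that the grammar plus the transformation table can only ever produce trees accepted by such a schema, for any input.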
Debugging
......
@@ -99,7 +99,7 @@ from DHParser import logging, is_filename, load_if_file, \\
    keep_children, is_one_of, not_one_of, has_content, apply_if, remove_first, remove_last, \\
    remove_anonymous_empty, keep_nodes, traverse_locally, strip, lstrip, rstrip, \\
    replace_content, replace_content_by, forbid, assert_content, remove_infix_operator, \\
-    error_on, recompile_grammar, left_associative, swing_left, GLOBALS
+    error_on, recompile_grammar, left_associative, lean_left, GLOBALS
'''.format(dhparserdir=dhparserdir)
@@ -315,7 +315,7 @@ def get_preprocessor() -> PreprocessorFunc:

GRAMMAR_FACTORY = '''
def get_grammar() -> {NAME}Grammar:
-    global GLOBALS
+    """Returns a thread/process-exclusive {NAME}Grammar-singleton."""
    try:
        grammar = GLOBALS.{NAME}_{ID:08d}_grammar_singleton
    except AttributeError:
@@ -328,14 +328,17 @@ def get_grammar() -> {NAME}Grammar:

TRANSFORMER_FACTORY = '''
-def {NAME}Transform() -> TransformationFunc:
+def Create{NAME}Transformer() -> TransformationFunc:
+    """Creates a transformation function that does not share state with other
+    threads or processes."""
    return partial(traverse, processing_table={NAME}_AST_transformation_table.copy())

def get_transformer() -> TransformationFunc:
+    """Returns a thread/process-exclusive transformation function."""
    try:
        transformer = GLOBALS.{NAME}_{ID:08d}_transformer_singleton
    except AttributeError:
-        GLOBALS.{NAME}_{ID:08d}_transformer_singleton = {NAME}Transform()
+        GLOBALS.{NAME}_{ID:08d}_transformer_singleton = Create{NAME}Transformer()
        transformer = GLOBALS.{NAME}_{ID:08d}_transformer_singleton
    return transformer
'''
@@ -343,6 +346,7 @@ def get_transformer() -> TransformationFunc:

COMPILER_FACTORY = '''
def get_compiler() -> {NAME}Compiler:
+    """Returns a thread/process-exclusive {NAME}Compiler-singleton."""
    try:
        compiler = GLOBALS.{NAME}_{ID:08d}_compiler_singleton
    except AttributeError:
......
@@ -59,7 +59,7 @@ __all__ = ('TransformationDict',
    'normalize_whitespace',
    'move_adjacent',
    'left_associative',
-    'swing_left',
+    'lean_left',
    'apply_if',
    'apply_unless',
    'traverse_locally',
@@ -893,11 +893,17 @@ def left_associative(context: List[Node]):

@transformation_factory(collections.abc.Set)
-def swing_left(context: List[Node], operators: AbstractSet[str]):
+def lean_left(context: List[Node], operators: AbstractSet[str]):
    """
-    Rearranges a node that contains a sub-node on the right
-    with a left-associative operator so that the tree structure
-    reflects its left-associative character.
+    Turns a right-leaning tree into a left-leaning tree:
+    (op1 a (op2 b c)) -> (op2 (op1 a b) c)
+
+    If a left-associative operator is parsed with a right-recursive
+    parser, `lean_left` can be used to rearrange the tree structure
+    so that it properly reflects the order of association.
+
+    ATTENTION: This transformation function moves forward recursively,
+    so grouping nodes must not be eliminated during traversal! This
+    must be done in a second pass.
    """
    node = context[-1]
    assert node.children and len(node.children) == 2
@@ -913,6 +919,8 @@ def swing_left(context: List[Node], operators: AbstractSet[str]):
    node.result = (right, c)
    node.tag_name = op2
    swap_attributes(node, right)
+    # continue recursively on the left branch
+    lean_left([right], operators)
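The rotation that `lean_left` performs can be illustrated on plain nested tuples; this `rotate_left` helper is an independent toy version, not DHParser's Node-based implementation:

```python
def rotate_left(tree, operators):
    """Rotate a right-leaning binary expression tree to the left:
    (op1, a, (op2, b, c)) -> (op2, (op1, a, b), c), repeated until the
    right branch no longer starts with one of the given operators."""
    if not (isinstance(tree, tuple) and tree[0] in operators):
        return tree
    op1, a, right = tree
    while isinstance(right, tuple) and right[0] in operators:
        op2, b, c = right
        # pull the left operand of the right subtree into the left branch
        op1, a, right = op2, (op1, a, b), c
    return (op1, a, right)
```

Applied to the right-recursive parse of `8 - 4 - 2 - 1`, i.e. `('sub', 8, ('sub', 4, ('sub', 2, 1)))`, it yields the left-leaning `('sub', ('sub', ('sub', 8, 4), 2), 1)`, which reflects the intended grouping `((8 - 4) - 2) - 1`.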
# @transformation_factory(collections.abc.Set) # @transformation_factory(collections.abc.Set)
......
# Arithmetic

-PLACE A SHORT DESCRIPTION HERE
+This is going to be a full-fledged grammar for arithmetic expressions.
+For simple text-book examples of arithmetic grammars, see the
+"ArithmeticSimple" example.
+
+STILL WORK IN PROGRESS!!!

-Author: AUTHOR'S NAME <EMAIL>, AFFILIATION
+Author: Eckhart Arnold (eckhart.arnold@posteo.de)

## License
......
@@ -42,7 +42,7 @@ div = factor "/" term

#######################################################################

factor = [sign] ([element] tail | element) ~
-tail = (seq | tail_elem) [imaginary]
+tail = (seq | tail_elem) [i]
seq = tail_elem tail
sign = PLUS | MINUS

@@ -55,10 +55,10 @@ sign = PLUS | MINUS

element = pow | value
pow = value `^` [sign] element
-value = (number | tail_value) [imaginary]
+value = (number | tail_value) [i]
tail_elem = tail_pow | tail_value
-tail_pow = tail_value [imaginary] `^` element
+tail_pow = tail_value [i] `^` element
tail_value = special | function | VARIABLE | group
group = `(` §expression `)`

@@ -86,7 +86,7 @@ number = NUMBER

special = (pi | e)
pi = `pi` | `π`
e = `e`
-imaginary = `i`
+i = `i`  # imaginary number unit

#######################################################################
......
@@ -27,7 +27,7 @@ from DHParser import logging, is_filename, load_if_file, \
    Node, TransformationFunc, TransformationDict, transformation_factory, traverse, \
    remove_children_if, move_adjacent, normalize_whitespace, is_anonymous, matches_re, \
    reduce_single_child, replace_by_single_child, replace_or_reduce, remove_whitespace, \
-    remove_empty, remove_tokens, flatten, is_insignificant_whitespace, is_empty, \
+    remove_empty, remove_tokens, flatten, is_insignificant_whitespace, is_empty, lean_left, \
    collapse, collapse_if, replace_content, WHITESPACE_PTYPE, TOKEN_PTYPE, \
    remove_nodes, remove_content, remove_brackets, change_tag_name, remove_anonymous_tokens, \
    keep_children, is_one_of, not_one_of, has_content, apply_if, remove_first, remove_last, \
@@ -42,11 +42,11 @@ from DHParser import logging, is_filename, load_if_file, \

#
#######################################################################

-def ArithmeticExperimentalPreprocessor(text):
+def ArithmeticRightRecursivePreprocessor(text):
    return text, lambda i: i

def get_preprocessor() -> PreprocessorFunc:
-    return ArithmeticExperimentalPreprocessor
+    return ArithmeticRightRecursivePreprocessor
#######################################################################

@@ -55,15 +55,15 @@ def get_preprocessor() -> PreprocessorFunc:

#
#######################################################################

-class ArithmeticExperimentalGrammar(Grammar):
-    r"""Parser for an ArithmeticExperimental source file.
+class ArithmeticRightRecursiveGrammar(Grammar):
+    r"""Parser for an ArithmeticRightRecursive source file.
    """
    element = Forward()
    expression = Forward()
    sign = Forward()
    tail = Forward()
    term = Forward()
-    source_hash__ = "01e521bb6dbca853cb7eef515c2dc8d7"
+    source_hash__ = "57a303f28ffb50a84b86e98c71ea2e32"
    static_analysis_pending__ = [True]
    parser_initialization__ = ["upon instantiation"]
    resume_rules__ = {}

@@ -75,7 +75,7 @@ class ArithmeticExperimentalGrammar(Grammar):
    NUMBER = RegExp('(?:0|(?:[1-9]\\d*))(?:\\.\\d+)?')
    MINUS = RegExp('-')
    PLUS = RegExp('\\+')
-    imaginary = Token("i")
+    i = Token("i")
    e = Token("e")
    pi = Alternative(DropToken("pi"), DropToken("π"))
    special = Alternative(pi, e)

@@ -87,14 +87,14 @@ class ArithmeticExperimentalGrammar(Grammar):
    function = Alternative(sin, cos, tan, log)
    group = Series(DropToken("("), expression, DropToken(")"), mandatory=1)
    tail_value = Alternative(special, function, VARIABLE, group)
-    tail_pow = Series(tail_value, Option(imaginary), DropToken("^"), element)
+    tail_pow = Series(tail_value, Option(i), DropToken("^"), element)
    tail_elem = Alternative(tail_pow, tail_value)
-    value = Series(Alternative(number, tail_value), Option(imaginary))
+    value = Series(Alternative(number, tail_value), Option(i))
    pow = Series(value, DropToken("^"), Option(sign), element)
    element.set(Alternative(pow, value))
    sign.set(Alternative(PLUS, MINUS))
    seq = Series(tail_elem, tail)
-    tail.set(Series(Alternative(seq, tail_elem), Option(imaginary)))
+    tail.set(Series(Alternative(seq, tail_elem), Option(i)))
    factor = Series(Option(sign), Alternative(Series(Option(element), tail), element), dwsp__)
    div = Series(factor, Series(DropToken("/"), dwsp__), term)
    mul = Series(factor, Series(DropToken("*"), dwsp__), term)

@@ -104,15 +104,14 @@ class ArithmeticExperimentalGrammar(Grammar):
    expression.set(Alternative(add, sub, term))
    root__ = expression
-def get_grammar() -> ArithmeticExperimentalGrammar:
-    global GLOBALS
+def get_grammar() -> ArithmeticRightRecursiveGrammar:
    try:
-        grammar = GLOBALS.ArithmeticExperimental_00000001_grammar_singleton
+        grammar = GLOBALS.ArithmeticRightRecursive_00000001_grammar_singleton
    except AttributeError:
-        GLOBALS.ArithmeticExperimental_00000001_grammar_singleton = ArithmeticExperimentalGrammar()
+        GLOBALS.ArithmeticRightRecursive_00000001_grammar_singleton = ArithmeticRightRecursiveGrammar()
        if hasattr(get_grammar, 'python_src__'):
-            GLOBALS.ArithmeticExperimental_00000001_grammar_singleton.python_src__ = get_grammar.python_src__
+            GLOBALS.ArithmeticRightRecursive_00000001_grammar_singleton.python_src__ = get_grammar.python_src__
-        grammar = GLOBALS.ArithmeticExperimental_00000001_grammar_singleton
+        grammar = GLOBALS.ArithmeticRightRecursive_00000001_grammar_singleton
    return grammar
@@ -123,22 +122,39 @@ def get_grammar() -> ArithmeticExperimentalGrammar:

#######################################################################

-ArithmeticExperimental_AST_transformation_table = {
-    # AST Transformations for the ArithmeticExperimental-grammar
+ArithmeticRightRecursive_AST_transformation_table = {
+    # AST Transformations for the ArithmeticRightRecursive-grammar
    # "<": flatten_anonymous_nodes,
-    "expression, term, sign, group, factor": [replace_by_single_child],
+    "special, number, function, tail_value, tail_elem, tail, value, "
+    "element, factor, term, expression":
+        [replace_by_single_child],
+    "pi": [replace_content_by('π')],
+    "tail_pow": [change_tag_name('pow')],
+    "add, sub": [lean_left({'sub', 'add'})],
+    "mul, div": [lean_left({'mul', 'div'})]
}

-def ArithmeticExperimentalTransform() -> TransformationFunc:
-    return partial(traverse, processing_table=ArithmeticExperimental_AST_transformation_table.copy())
+def ArithmeticRightRecursiveTransform() -> TransformationFunc:
+    def transformation_func(cst: Node, pass_1, pass_2):
+        """Special transformation function requires two passes, because
+        otherwise elimination of grouping nodes (pass 2) would interfere
+        with the adjustment of the tree structure to the left-associativity
+        of the `add`, `sub`, `mul` and `div` operators."""
+        traverse(cst, pass_1)
+        traverse(cst, pass_2)
+    return partial(transformation_func,
+                   pass_1=ArithmeticRightRecursive_AST_transformation_table.copy(),
+                   pass_2={'group': [replace_by_single_child]}.copy())

def get_transformer() -> TransformationFunc:
    try:
-        transformer = GLOBALS.ArithmeticExperimental_00000001_transformer_singleton
+        transformer = GLOBALS.ArithmeticRightRecursive_00000001_transformer_singleton
    except AttributeError:
-        GLOBALS.ArithmeticExperimental_00000001_transformer_singleton = ArithmeticExperimentalTransform()
-        transformer = GLOBALS.ArithmeticExperimental_00000001_transformer_singleton
+        GLOBALS.ArithmeticRightRecursive_00000001_transformer_singleton = \
+            ArithmeticRightRecursiveTransform()
+        transformer = GLOBALS.ArithmeticRightRecursive_00000001_transformer_singleton
    return transformer
@@ -148,12 +164,12 @@ def get_transformer() -> TransformationFunc:

#
#######################################################################

-class ArithmeticExperimentalCompiler(Compiler):
-    """Compiler for the abstract-syntax-tree of a ArithmeticExperimental source file.
+class ArithmeticRightRecursiveCompiler(Compiler):
+    """Compiler for the abstract-syntax-tree of a ArithmeticRightRecursive source file.
    """

    def __init__(self):
-        super(ArithmeticExperimentalCompiler, self).__init__()
+        super(ArithmeticRightRecursiveCompiler, self).__init__()

    def _reset(self):
        super()._reset()

@@ -174,12 +190,12 @@ class ArithmeticExperimentalCompiler(Compiler):
    # return node

-def get_compiler() -> ArithmeticExperimentalCompiler:
+def get_compiler() -> ArithmeticRightRecursiveCompiler:
    try:
-        compiler = GLOBALS.ArithmeticExperimental_00000001_compiler_singleton
+        compiler = GLOBALS.ArithmeticRightRecursive_00000001_compiler_singleton
    except AttributeError:
-        GLOBALS.ArithmeticExperimental_00000001_compiler_singleton = ArithmeticExperimentalCompiler()
-        compiler = GLOBALS.ArithmeticExperimental_00000001_compiler_singleton
+        GLOBALS.ArithmeticRightRecursive_00000001_compiler_singleton = ArithmeticRightRecursiveCompiler()
+        compiler = GLOBALS.ArithmeticRightRecursive_00000001_compiler_singleton
    return compiler
@@ -231,4 +247,4 @@ if __name__ == "__main__":
        else:
            print(result.as_xml() if isinstance(result, Node) else result)
    else:
-        print("Usage: ArithmeticExperimentalCompiler.py [FILENAME]")
+        print("Usage: ArithmeticRightRecursiveCompiler.py [FILENAME]")
-# Arithmetic
+# ArithmeticRightRecursive

-PLACE A SHORT DESCRIPTION HERE
+This is going to be a full-fledged grammar for arithmetic expressions.
+For simple text-book examples of arithmetic grammars, see the
+"ArithmeticSimple" example.
+
+STILL WORK IN PROGRESS!!!

-Author: AUTHOR'S NAME <EMAIL>, AFFILIATION
+Author: Eckhart Arnold (eckhart.arnold@posteo.de)

## License
......