HEX
Server: Apache
System: Linux ip-172-26-2-106 6.1.0-37-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.140-1 (2025-05-22) x86_64
User: daemon (1)
PHP: 8.2.28
Disabled: NONE
Upload Files
File: //usr/lib/python3/dist-packages/pygments/__pycache__/lexer.cpython-311.pyc
[Binary content omitted: the file above is CPython 3.11 compiled bytecode (a `.pyc`), not text, and its raw bytes are not reproducible here. Recoverable metadata from the embedded docstrings: this is `pygments/lexer.py` — "Base lexer classes" for the Pygments syntax highlighter, copyright 2006-2022 by the Pygments team, BSD license. The module's `__all__` exports: `Lexer`, `RegexLexer`, `ExtendedRegexLexer`, `DelegatingLexer`, `LexerContext`, `include`, `inherit`, `bygroups`, `using`, `this`, `default`, `words`. Embedded docstrings also identify `RegexLexer` as the "Base for simple stateful regular expression-based lexers", where subclasses "provide a list of states and regular expressions", and `ProfilingRegexLexer` as a "Drop-in replacement for RegexLexer that does profiling of its regexes".]
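The docstrings recoverable from the dumped bytecode describe `RegexLexer` as a base class where you "need only provide a list of states and regular expressions". A minimal sketch of that usage follows; the lexer name, aliases, and token rules here are illustrative assumptions, not part of the dumped file.

```python
# Hypothetical example of subclassing pygments.lexer.RegexLexer,
# the class whose bytecode is shown above. The rules below (a toy
# INI-style highlighter) are invented for illustration only.
from pygments.lexer import RegexLexer
from pygments.token import Comment, Name, Text, Whitespace


class IniSectionLexer(RegexLexer):
    """Toy lexer: highlights [sections] and ; comments in INI-like text."""
    name = 'IniSectionDemo'       # hypothetical name
    aliases = ['ini-demo']        # hypothetical alias

    # Each state maps to a list of (regex, token[, new_state]) rules,
    # tried in order at the current position.
    tokens = {
        'root': [
            (r'\s+', Whitespace),
            (r';[^\n]*', Comment.Single),
            (r'\[[^\]\n]+\]', Name.Namespace),
            (r'[^\s;\[]+', Text),
        ],
    }


if __name__ == '__main__':
    for tok, val in IniSectionLexer().get_tokens('[core]\n; a comment\n'):
        print(tok, repr(val))
```

`get_tokens()` yields `(tokentype, value)` pairs after the preprocessing steps visible in the dumped `Lexer.get_tokens` (encoding guessing, newline normalization, tab expansion), while `get_tokens_unprocessed()` yields `(index, tokentype, value)` triples.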