Figure 1. Four objective and subjective outcomes were used to investigate three levels of brain processing of sound.
Making sense of sound – Evidence at three levels
Making sense of sound requires our sensory, cognitive, and social skills to constantly work together, so that we can decide about our actions, communicate with others, and react to what is happening around us (Pichora-Fuller et al., 2017; Meyer et al., 2016). Hearing loss challenges this fine balance of skills due to changes in the sensory input received by the brain. The restoration of this sensory input with hearing aids should ideally restore the neural activity patterns sent to the brain (Lesica, 2018), so that cognitive resources are not fully dedicated to effortful processing of a degraded neural code, but remain available for other important functions such as storing what was heard in memory (Rönnberg et al., 2013).

Recent research findings investigating processing stages in the auditory cortex (O’Sullivan et al., 2019; Hausfeld et al., 2018; Puvvada & Simon, 2017) have shown that the brain first represents all elements of the incoming sound scene in primary-like cortical areas (Figure 1, point A) and creates contrast between sounds that carry information, referred to here as the foreground of the sound scene, and sounds that do not, which form the background (Figure 1, point B). It can then use selective attention to focus on specific foreground sounds of interest, which become enhanced compared to non-attended sounds in higher-order cortical areas (Figure 1, point C).

Based on these new scientific insights (for more details, see Man & Ng, 2020), the audiology of Oticon More is designed to process sound in a way that provides the brain better access to the full sound scene, to make important sounds in the foreground stand out from the background, and to amplify this balanced sound scene in detail, so that the user can better focus on the sounds of interest, and thus better understand and remember them (Santurette & Behrens, 2020).
Such clinical benefits of Oticon More were investigated in four studies described below, covering three essential levels of brain processing, hereafter referred to as orient, focus, and recognize (see Figure 1):
Brain responses (EEG): Brain representations of sound in the auditory cortex when using Oticon More were investigated with electroencephalography (EEG) to test how clear the full sound scene and sounds in the foreground were in early cortical processing stages (orient), and how clear individual sounds were in higher-order processing stages (focus);
Ability to understand speech in focus: The ability to understand the talker in focus when several people speak at the same time was investigated in both a simple and a complex listening environment with Oticon More;
Speech understanding in noise: A standard speech-in-noise test was carried out to compare speech understanding performance when using Oticon More and Oticon Opn S;
Memory recall: A dual-task paradigm was used to study how well listeners could remember speech with Oticon More compared to Oticon Opn S.
Brain responses (EEG)
When we talk about “paying attention”, our first instinct is that, somewhere and somehow, the object of interest (for instance, a given sound) has to be more “apparent” to us than everything else – as if an internal hierarchy exists in our minds, ranking the elements of a scene according to how relevant they are to our current goals. This can certainly be achieved in many tasks: think of an artist fully absorbed in their work, shutting out distractions to dedicate their focus to their craft. This seems rather intuitive and is generally achieved with ease in our everyday lives. We also know that, as hearing loss degrades the fidelity of the auditory signal, applying selective attention becomes a significant challenge for a person with hearing loss (Shinn-Cunningham & Best, 2008).
A previous study investigated how OpenSound Navigator (OSN) in Oticon Opn S helped with selective attention by measuring neural representations of speech via EEG (Alickovic et al., 2020; Ng & Man, 2019). However, recent research has pointed towards a hierarchical processing of sounds, i.e., with different stages, during selective attention (O’Sullivan et al., 2019; Puvvada & Simon, 2017). From there, we now know that the brain uses a multistage process that can be described as orient and focus, where the fidelity of one stage influences the ease of the following (see Man & Ng, 2020, for an overview). For this reason, to extend our findings on selective attention to a more detailed level, we used a new EEG analysis method to investigate how MoreSound Intelligence™ (MSI) in Oticon More affected these two critical orient and focus steps, using a similar setup to Ng & Man (2019), schematized in Figure 2.
Thirty-one experienced hearing aid users (mean age: 65.6 years) with stable, bilateral, sensorineural hearing loss ranging from mild to moderately-severe were recruited to perform this experiment. We compared brain responses obtained with MSI in More to the Opn S algorithm from our previous study, OSN. Our focus was therefore to find out how the two hearing aids compared to each other for the two steps, orient and focus, by analyzing early and late EEG responses, respectively. Where Ng and Man (2019) analysed only late (focus) responses to the talker in focus (chosen between F1 and F2 in Figure 2), the secondary talker (F2 or F1), and the background noise (B1 + B2 + B3 + B4), here we also investigated early (orient) responses to the full sound scene (F1 + F2 + B1 + B2 + B3 + B4) and to sounds in the foreground (F1 + F2). These two components in the orient stage of brain processing are critical as they provide the necessary details for the focus stage to then process the attended talker and the secondary talker.
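The whitepaper does not detail the EEG analysis pipeline. A common approach in the stimulus-reconstruction literature for quantifying this kind of neural speech tracking is to train a linear decoder that reconstructs a source's amplitude envelope from the multichannel EEG and use the reconstruction accuracy as the tracking strength. The sketch below is illustrative only (the function name, ridge parameter, and synthetic data are assumptions, not the authors' actual method):

```python
import numpy as np

def tracking_strength(eeg, envelope, lam=1e2):
    """Correlation between a sound source's envelope and its linear
    reconstruction from multichannel EEG (a simple ridge-regression decoder).

    eeg:      (n_samples, n_channels) band-passed EEG
    envelope: (n_samples,) amplitude envelope of one source,
              e.g. the foreground F1 + F2 or the full sound scene
    """
    X = eeg - eeg.mean(axis=0)
    y = envelope - envelope.mean()
    # Ridge solution: w = (X'X + lam*I)^{-1} X'y
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    y_hat = X @ w
    return float(np.corrcoef(y, y_hat)[0, 1])

# Synthetic check: EEG channels that contain the envelope plus noise
# should yield a much higher tracking value than an unrelated envelope.
rng = np.random.default_rng(0)
env = np.abs(rng.standard_normal(2000))
eeg = np.column_stack([env + 0.5 * rng.standard_normal(2000) for _ in range(8)])
r_tracked = tracking_strength(eeg, env)
r_unrelated = tracking_strength(eeg, np.abs(rng.standard_normal(2000)))
```

In such analyses, a larger correlation for one condition than another (e.g., MSI on vs. off) is interpreted as stronger cortical tracking of that source, which is the sense in which percentage improvements are reported below.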
The results of the study are shown in Figure 3. We can start off by analysing the two stages individually:
Early EEG responses – Orient (Figure 3, left panel): The full sound scene refers to the combination of all objects in the environment (Figure 2, F1 + F2 + B1 + B2 + B3 + B4), while the foreground refers to the combination of the two possible talkers which the listener may attend to (F1 + F2). Both are critical to the listening experience: the former contributes to the listener’s awareness of their environment, while the latter affects their ability to switch attention. In Figure 3, it can be seen that the brain’s ability to track all the objects in the full sound scene, as measured by the strength of early EEG responses, improves by 60% with MSI enabled compared to disabled (p < 0.001). Importantly, MSI in More also allows 30% better access to the full sound scene compared to OSN in Opn S (p = 0.011). In the foreground, MSI improves the brain’s tracking of the two combined talkers by 45% and 20% compared to MSI off (p < 0.001) and OSN on (p = 0.024), respectively.
Late EEG responses – Focus (Figure 3, right panel): At this later stage, it is critical during communication for the listener to selectively attend to the talker in focus, whilst maintaining a low but acceptable level of tracking of the secondary talker to allow switching attention. This was demonstrated with MSI for the talker in focus, for which the strength of tracking in late EEG responses improved by 5% with MSI on compared to both MSI off (p = 0.044) and OSN on (p = 0.010). For the secondary talker, MSI improved the tracking by 30% compared to MSI off (p = 0.002).
To summarise these findings, MSI was shown to improve the brain’s ability to track the different objects in the user’s surrounding environment. This was demonstrated in both critical steps supporting the brain’s perception of sound – orient and focus.
Ability to understand speech in focus
Are the above improvements in the brain representation of speech with Oticon More also reflected in the behavioral performance of users in multi-talker situations? In order to test this, we measured the ability of users to understand one talker in focus in the presence of two competing talkers, using an adaptation of the competing digits test developed by Best et al. (2018). Just like in the EEG experiment described above, this test uses a speech-on-speech task that requires selectively attending to one of several simultaneous speech sources.
Thirty-four experienced hearing-aid users (mean age: 63 years), all with stable, sensorineural, bilateral hearing losses ranging from slight to moderately-severe (4-frequency pure-tone-average (PTA) range: 19-68 dB HL, mean: 40.3 dB HL), participated in the experiment. They were seated in the center of a loudspeaker array and listened to three simultaneous digit sequences spoken by different female talkers located at -30°, 0°, and +30° at a level of 65 dB SPL. Each sequence contained four digits in each trial and an acoustic location cue, spoken by a male voice at 0° just before the first digit, indicated which talker to focus on (“left”, “centre”, or “right”). The task of the participants was to repeat only the four digits spoken by the talker in focus and ignore the digits spoken by the competing talkers. The task was performed in a complex environment, with 4-talker babble noise played from each of three loudspeakers at -100°, 180°, and +100° and with an overall level of 70 dB SPL, and in a simple environment without background noise. Each of the participants performed the task with both Opn S and More hearing aids fitted using the VAC+ rationale and the order of test conditions was randomized.
The left graph in Figure 4 shows the percentage of correctly identified digits for the talker in focus in the tested complex environment. For Oticon More, the results showed significantly higher recognition of the digits in focus when MSI was active compared to when it was inactive (p < 0.001), corresponding to a relative improvement of 15%. Performance with Oticon More and MSI on was also significantly higher than with Opn S and OSN on by 5% on average (p = 0.014). These behavioral results are consistent with the above EEG results showing increased brain representation of the talker in focus for MSI on vs off and for More vs Opn S with MSI and OSN on, respectively.
The right graph in Figure 4 shows the results for the same task in the tested simple environment. Without background noise, overall performance is higher than in the complex environment and remains significantly higher for More than for Opn S, by 5% on average (p = 0.025). This improvement indicates that, combined, the increased 24-channel resolution of the Polaris platform, the action of the MoreSound Amplifier (MSA), and the new Virtual Outer Ear in More provide a speech-on-speech benefit to users even in simple environments.
Taken together, these findings show that Oticon More improves the ability of users to understand the talker in focus both in simple and complex environments.
Speech understanding in noise
To investigate speech understanding improvements in Oticon More, a study was performed in Copenhagen, Denmark. Eighteen listeners with hearing loss within the 85-dB speaker level range were recruited, with an average age of 68.5 years (range 52-77 years). They performed the standardized Danish speech-intelligibility-in-noise test, Dantale II (Wagener et al., 2003), to compare speech recognition with Opn S and More across three fitting conditions: the default prescribed profiles for OSN and MSI, and two personalized fittings providing either less or more help from the OSN and MSI features in complex sound environments. In this way, evidence was gathered both for the most standard situation for hearing aid users and for situations that required more complex and personalized settings.
In the test, matrix sentences in Danish were presented by a female speaker to the front (0°) while masker signals consisting of an International Speech Test Signal (22°) and three unmodulated signals (+/- 112° and 180°) were presented simultaneously. The test was performed adaptively towards a 70% correct speech recognition threshold (SRT). The speech was initially presented at 72 dB SPL and the maskers at 67 dB SPL.
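The whitepaper does not specify the adaptive rule used to converge on the 70% speech recognition threshold (SRT). Adaptive speech tests of this kind typically adjust the SNR after each sentence in proportion to how far the score is from the target, so the track settles at the SNR yielding the target intelligibility. The sketch below is a generic up-down rule with a hypothetical step size, not the exact Dantale II procedure:

```python
def adapt_snr(snr_db, proportion_correct, target=0.70, step_db=2.0):
    """One update of a simple adaptive track converging on `target`
    intelligibility: lower the SNR (harder) when the listener scores
    above target, raise it (easier) when below. The track settles near
    the SNR yielding `target` proportion correct, i.e. the 70% SRT.
    """
    return snr_db - step_db * (proportion_correct - target)

# Example: speech starts at 72 dB SPL against 67 dB SPL maskers (+5 dB SNR).
snr = 5.0
for score in [1.0, 0.8, 0.6, 0.8]:  # proportion of words correct per sentence
    snr = adapt_snr(snr, score)
```

The final SRT is usually taken as the average SNR over the later, converged portion of the track.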
Results are illustrated in Figure 5. They showed a significant difference for all conditions, with Oticon More improving the SRT significantly for the test participants. For the two default settings, the average SRT difference between More and Opn S was 1.2 dB (p < 0.001); for the personalized setting providing less help, the difference was 1.5 dB (p < 0.001); for the personalized setting providing more help, the difference was 0.7 dB (p < 0.04).
SRTs in dB signal-to-noise ratio (SNR) can be converted to speech understanding in percent by fitting the data to a psychometric function. According to Wagener et al. (2003), the slope for hearing-impaired listeners on the Dantale II test is 13.2%/dB, but that standard is based on an SRT of 50%, whereas this test targeted 70%. Taking this difference into account, together with the minor variations in the noise type used in this test compared to the reference, a slightly shallower slope of 12%/dB was selected to keep the conversion reliable. Using this slope, Oticon More showed 15% better speech understanding than Opn S for the default setting that is most commonly prescribed in the clinics. For more help from OSN versus MSI, Oticon More showed an improvement of 8%, and for less help, an even higher speech understanding improvement of 18%, which shows that the fitting handles provided in the MSI feature facilitate better speech understanding even further when the sound environment gets more complex.
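The conversion above is straightforward arithmetic: each SRT benefit in dB is multiplied by the assumed 12%/dB slope. A quick check of the reported numbers:

```python
# SRT benefits (More vs. Opn S, in dB) and the slope, both from the text.
SLOPE = 12.0  # %/dB, assumed psychometric slope for this Dantale II variant

srt_benefit_db = {
    "Default fitting": 1.2,
    "Less help": 1.5,
    "More help": 0.7,
}

benefit_pct = {cond: delta * SLOPE for cond, delta in srt_benefit_db.items()}
# -> Default fitting: 14.4% (~15%), Less help: 18.0%, More help: 8.4% (~8%)
```

The results of 14.4%, 18.0%, and 8.4% match the rounded 15%, 18%, and 8% figures reported in the text.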
Memory recall
We have consistently demonstrated that our BrainHearing™ technology frees up cognitive resources and facilitates the cognitive processing of speech using a memory recall test known as the Sentence-final Word Identification and Recall (SWIR) test (Ng et al., 2013). In our previous reports, we documented improved memory recall performance with Oticon Opn (Le Goff et al., 2016), Opn S (Juul Jensen, 2019), and Xceed (Ng & Skagerstrand, 2019), even when speech is highly intelligible. In this study, we investigated whether More would result in better recall performance compared to Opn S.
Twenty-five participants with mild to moderate hearing loss (average 4-frequency PTA of 48.5 dB HL, average age of 58.8 years) were recruited. The SWIR test setup in this study was similar to that of our previous studies, with target speech from the front and noise from the background; please refer to the previous whitepapers for details. The target sentences were from the Danish Hearing In Noise Test (Nielsen & Dau, 2011). The participants were asked to 1) repeat the last word after listening to each sentence and, after listening to a list of seven sentences, 2) recall, in any order, as many of the last words in the list as possible. The background noise, fixed at 70 dB SPL, was a 16-talker babble constructed from four loudspeakers, each presenting a 4-talker babble. The presentation level was individualized to the level yielding 95% speech intelligibility with Opn S (the average presentation level was +7.0 dB SNR).
Long-term recall (memory recall for the last words from sentences 1 and 2) and short-term recall (memory recall for the last words from sentences 6 and 7) were analysed. Overall, More resulted in better long-term recall compared to Opn S (p < 0.05; see Figure 6). This corresponds to approximately 16% better long-term memory recall. Better recall from long-term memory is associated with more cognitive resources available for better encoding of speech into the memory. There was no difference in the short-term recall between More and Opn S.
In recent years, the number of studies investigating listening effort has grown tremendously. In the literature, listening effort can be measured through functional brain imaging (e.g., EEG), is reflected in physiological responses outside the brain (e.g., pupillometry), and frequently results in measurable differences in behavioral performance (e.g., memory recall); see Peelle (2018) for a review. Our results show that More frees up more cognitive resources and hence improves recall performance, which can be interpreted as More reducing listening effort compared to Opn S.
Conclusion
The above studies provide evidence for the following BrainHearing benefits of Oticon More:
The full sound scene is 60% clearer as it enters the brain with MoreSound Intelligence, an improvement of 30% compared to Oticon Opn S.
The foreground passed on from the Orient to the Focus subsystems in the brain’s hearing centre is clearer.
The sounds in focus as well as secondary sounds of interest are stronger in the Focus subsystem, making it easier to focus and providing a better basis for switching focus.
Speech understanding for the talker in focus in multi-talker situations is improved in both complex and simple environments.
Speech understanding in noise is further improved by 15% compared to Opn S.
Oticon More leads to better recall for long-term memory than Opn S, indicating reduced listening effort for the user.
Whitepaper, 2020

AUTHORS
Sébastien Santurette, Elaine Hoi Ning Ng, Josefine Juul Jensen, and Brian Man Kai Loong
Centre for Applied Audiology Research, Oticon A/S

Oticon is part of the Demant Group.
69620UK / 2020.11.20 / v1
www.oticon.global
References
Alickovic, E., Lunner, T., Wendt, D., Fiedler, L., Hietkamp, R., Ng, E. H. N., & Graversen, C. (2020). Neural representation enhanced for speech and reduced for background noise with a hearing aid noise reduction scheme during a selective attention task. Frontiers in neuroscience, 14, 846.
Best, V., Swaminathan, J., Kopčo, N., Roverud, E., & Shinn-Cunningham, B. (2018). A “Buildup” of Speech Intelligibility in Listeners With Normal Hearing and Hearing Loss. Trends in Hearing, 22, 2331216518807519.
Brændgaard, M. (2020a). MoreSound Intelligence™. Oticon Tech Paper.
Brændgaard, M. (2020b). The Polaris platform. Oticon Tech Paper.
Hausfeld, L., Riecke, L., Valente, G., & Formisano, E. (2018). Cortical tracking of multiple streams outside the focus of attention in naturalistic auditory scenes. NeuroImage, 181, 617-626.
Juul Jensen, J. (2019). Oticon Opn S clinical evidence. Oticon Whitepaper.
Man, B. K. L., & Ng, E. H. N. (2020). BrainHearing™ – The new perspective. Oticon Whitepaper.
Meyer, C., Grenness, C., Scarinci, N., & Hickson, L. (2016). What is the international classification of functioning, disability and health and why is it relevant to audiology? In Seminars in Hearing (Vol. 37, No. 03, pp. 163-186). Thieme Medical Publishers.
Le Goff, N., Wendt, D., Lunner, T., & Ng, E. (2016). Opn clinical evidence. Oticon Whitepaper.
Lesica, N. A. (2018). Why do hearing aids fail to restore normal auditory perception? Trends in neurosciences, 41(4), 174-185.
Ng, E. H. N., Rudner, M., Lunner, T., Pedersen, M. S., & Rönnberg, J. (2013). Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. International Journal of Audiology, 52(7), 433-441.
Ng, E. H. N., & Man, B. K. L. (2019). Enhancing selective attention: Oticon Opn S™ new evidence. Oticon Whitepaper.
Ng, E. H. N., & Skagerstrand, Å. (2019). Oticon Xceed™ clinical evidence. Oticon Whitepaper.
Nielsen, J. B., & Dau, T. (2011). The Danish hearing in noise test. International journal of audiology, 50(3), 202-208.
Peelle, J. E. (2018). Listening effort: How the cognitive consequences of acoustic challenge are reflected in brain and behavior. Ear and hearing, 39(2), 204.
O’Sullivan, J., Herrero, J., Smith, E., Schevon, C., McKhann, G. M., Sheth, S. A., ... & Mesgarani, N. (2019). Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception. Neuron, 104(6), 1195-1209.
Pichora-Fuller, M. K., Alain, C., & Schneider, B. A. (2017). Older adults at the cocktail party. In The auditory system at the cocktail party (pp. 227-259). Springer, Cham.
Puvvada, K. C., & Simon, J. Z. (2017). Cortical representations of speech in a multitalker auditory scene. Journal of Neuroscience, 37(38), 9189-9196.
Rönnberg, J., Lunner, T., Zekveld, A., Sörqvist, P., Danielsson, H., Lyxell, B., ... & Rudner, M. (2013). The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Frontiers in systems neuroscience, 7, 31.
Santurette, S., & Behrens, T. (2020). The audiology of Oticon More. Oticon Whitepaper.
Shinn-Cunningham, B. G., & Best, V. (2008). Selective attention in normal and impaired hearing. Trends in amplification, 12(4), 283-299.
Wagener, K., Josvassen, J. L., & Ardenkjær, R. (2003). Design, optimization and evaluation of a Danish sentence test in noise. International Journal of Audiology, 42(1), 10-17.
Oticon More™ clinical evidence
Abstract
This whitepaper presents the results of four research studies carried out with Oticon More™, providing clinical evidence for BrainHearing™ benefits of More for the ability of the brain to orient, focus, and recognize.
Using a novel analysis method of brain responses measured via electroencephalography (EEG), we show that the MoreSound Intelligence™ (MSI) feature in More leads to a clearer representation of the full sound scene in the brain, as well as clearer sounds in the foreground and better focus on the sounds of interest, surpassing what is achieved with Oticon Opn S.
Such improvements translate into a better ability to understand the talker in focus in multi-talker situations in both simple and complex environments when using More. Measures of speech understanding in noise and memory recall also show significantly improved speech recognition and long-term memory recall with More compared to Opn S, demonstrating further benefits of More for cognition, with more successful and less effortful listening.
Figure 2. Foreground (F1, F2) and background (B1, B2, B3, B4) sounds in the EEG setup. F1 and F2 contained a male and a female talker reading excerpts from an audiobook, with each talker at 73 dB SPL. Each background sound was a 4-talker babble and the overall level of the background was 70 dB SPL.
Figure 3. Strength of EEG responses. Left: Early EEG responses to the full sound scene (Orient stage A in Figure 1),
to the foreground (Orient stage B in Figure 1). Right: Late EEG responses to the talker in focus and to the secondary talker (Focus stage C in Figure 1). Error bars show standard error of the mean.
Figure 4. Ability to understand the talker in focus in a complex (left graph) and simple (right graph) multi-talker environment for More and Opn S. Error bars show standard error of the mean.
Figure 5. Speech reception thresholds (dB SNR) for More and Opn S for default fitting settings and personalized settings providing less or more help in complex environments. Error bars show standard error of the mean.
Figure 6. Long-term and short-term memory recall results for Opn S and More. Error bars show standard error of the mean.
[Chart graphics for Figures 1 and 3-6. Y-axes: strength of EEG signal (Figure 3), speech understanding in % (Figure 4), speech reception threshold in dB SNR (Figure 5), and % correctly recalled last words (Figure 6).]