Category Archives: Open Source

Open Source, Free Software, and similar

the Apple curl security incident 12604

tldr: Apple thinks it is fine. I do not.

On December 28 2023, bugreport 12604 was filed in the curl issue tracker. We get a lot of issues filed most days, so this fact alone was hardly anything out of the ordinary. We read the reports, investigate, and ask follow-up questions to see what we can learn and what we need to address.

The stated title of the problem in this case was quite clear: flag --cacert behavior isn’t consistent between macOS and Linux, and it was filed by Yuedong Wu.

The friendly reporter showed how the curl version bundled with macOS behaves differently than curl binaries built entirely from open source. Even when running the same curl version on the same macOS machine.

The curl command line option --cacert provides a way for the user to tell curl that this is the exact set of CA certificates to trust when doing the following transfer. If the TLS server cannot provide a certificate that can be verified with that set of certificates, curl should fail and return an error.
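To make the expected behavior concrete, here is a minimal libcurl sketch using CURLOPT_CAINFO, the C API equivalent of --cacert. The URL and the CA bundle path are made-up examples:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    CURLcode res;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* trust ONLY the CA certificates in this file */
    curl_easy_setopt(curl, CURLOPT_CAINFO, "./trimmed-ca.pem");
    res = curl_easy_perform(curl);
    if(res == CURLE_PEER_FAILED_VERIFICATION)
      /* expected when the server cert cannot be verified with that set */
      fprintf(stderr, "verification failed, as documented\n");
    curl_easy_cleanup(curl);
  }
  return 0;
}
```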

This particular behavior and functionality in curl has been established for many years (the option was added to curl in December 2000) and is of course provided to allow users to know that curl communicates with a known and trusted server. A pretty fundamental part of what TLS does, really.

When this command line option is used with curl on macOS, the version shipped by Apple, it seems to fall back and check the system CA store in case the provided set of CA certs fails the verification. A secondary check that was not asked for, is not documented and quite frankly comes completely by surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server!

This is a security problem because now suddenly certificate checks pass that should not pass.

I reported this as a security problem in an email sent to Product Security at Apple on December 29 2023, 08:30 UTC. It’s not a major problem, but it is an issue.

Apple says it is fine

On March 8, 2024 Apple Product Security responded with their wisdom:

Hello,

Thank you again for reporting this to us and allowing us time to investigate.

Apple’s version of OpenSSL (LibreSSL) intentionally uses the built-in system trust store as a default source of trust. Because the server certificate can be validated successfully using the built-in system trust store, we don't consider this something that needs to be addressed in our platforms.

Best regards,
KC
Apple Product Security

Case closed.

I disagree

Obviously I think differently. This undocumented feature makes CA cert verification with curl on macOS totally unreliable and inconsistent with documentation. It tricks users.

Be aware.

Since this is not a security vulnerability in the curl version we ship, we have not issued a CVE or anything for this problem. The problem is strictly speaking not even in curl code. It comes with the version of LibreSSL that Apple ships and builds curl to use on their platforms.

Discussion

hacker news

curl’s built-in manual without nroff

On December 14 1998 we released curl 5.2.

The project was still early back then and lots of things had not settled yet. In that release, which came only two weeks after 5.1, we introduced the --manual option, or -M for short.

Long before I started working on curl I learnt to value and appreciate Unix manpages. I more or less learned C programming using them, and I certainly learned my first ways around Unix shells and command lines reading manpages. The first Unix I spent a lot of time on was AIX, in the early 1990s, several years before I first used Linux.

Since some systems don’t have the fine concept of manpages, I decided I would help those users by bundling the curl manpage into the tool itself. You can ask curl to show the curl manpage with the -M option. The entire thing, looking very similar, mostly just lacking font details such as bold, italics and underline.

How do you bundle a manpage?

I suppose there are many ways to go about making such a thing happen. In our case, we were already making and shipping a manpage in the nroff manpage format, so it became a question of generating a text version using that page as a source and then converting the text version into C source code.

Converting the manpage to text was done with nroff. nroff is an ancient Unix tool that existed on virtually every Unix flavor already back then. It seemed like a no-brainer to go with it, so that is what the curl build system would use.
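Conceptually, the generated source looks something like this. This is a simplified, hypothetical sketch; the file and symbol names in curl’s real build differ:

```c
#include <stdio.h>

/* hypothetical generated file: the nroff-rendered text version of the
   manpage, embedded as a C string so the tool can print it for -M */
static const char manual_text[] =
  "NAME\n"
  "       curl - transfer a URL\n"
  "\n"
  "SYNOPSIS\n"
  "       curl [options] [URL...]\n"
  /* ...many thousands of lines more in the real thing... */
  ;

void print_manual(void)
{
  fputs(manual_text, stdout);
}
```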

Once the build scripts were tweaked it continued to just work. It became problematic only on platforms that lacked nroff – but to help smooth over that obstacle we also shipped the generated source file in distribution tarballs.

nroff really?

nroff is a quirky tool. It generates its output differently based on environment details, and over the years it would also subtly change its output several times, which forced us to adjust the scripts as well.

Still, for as long as the curl manpage was primarily written in nroff format, it was challenging to generate the ASCII version any other way. We stuck with nroff.

Source format change

Earlier this year I blogged about how we finally changed the format of all the curl documentation files that create man pages in the curl project. We switched over to using markdown all over.

Even after that switch we still generated the built-in manual with nroff from the curl.1 manpage, which in turn was created entirely from a large set of source files written in markdown. The manpage was generated by our own custom tool.

The time was ripe

With firm control of the input file format and generating the output entirely with our own tool, it became a viable (and attractive) option to tweak the tool to offer an alternative output format. Allow it to render the output either as a manpage formatted file, or as an ASCII text file. Without involving or using nroff.

The time had come. We had suffered long enough. It was time to address this friction in the build system.

Yesterday, I merged the pull-request that finally, after 25 years, 2 months and 21 days, removed the use of nroff from the curl build scripts.

The curl -M output after this change is not 100% identical, but it is close enough and looks very good and similar in style to before. I did not actually even try to make it a complete clone. In fact, when we generate the output directly from markdown instead of going via the manpage, we can actually make it a better text-only version than we could before.

I opted to keep a justified right margin for the text, because that is what it has always used, and after some casual initial comparisons I think it looks better than without an aligned right column.

nroff does hyphenation of words, which helps somewhat to make justified text easier and nicer, and our own script does not – at least not until I have figured out a decent way to do it. For example, if the word “variable” is the last word on a line, it could be written as “vari-” at the end of that line with “able” starting the next. I believe doing it badly is worse than not doing it at all.

Building this is easier

It is now (much) easier to build this from source, even on esoteric platforms like Windows.

I don’t think a single person will miss the old way of doing this.

curl HTTP/3 security audit

An external security audit focused especially on curl’s HTTP/3 components and associated source code was recently concluded by Trail of Bits. In particular on the HTTP/3 related curl code that uses and interfaces with the ngtcp2 and nghttp3 libraries, as that is so far the only HTTP/3 backend in curl that is not labeled as experimental. The audit was sponsored by the Sovereign Tech Fund via OSTIF.

The audit revealed no major discoveries or security problems, but it led to improved fuzzing, and a few additional areas were noted as suitable to improve going forward, perhaps in particular in the fuzzing department. (If you’re looking for somewhere to contribute to curl, there’s your answer!)

The audit revealed that we had accidentally drastically shrunk the fuzzing coverage a while back without even noticing – which we of course immediately rectified. When fixed, we fortunately did not get an explosion in issues (phew!), which confirmed that we had not messed up in any particular way while the fuzzing ability was limited. But still: several man-weeks of professional code inspection and no serious flaws were detected. I am thrilled about this fact.

Because of curl’s use of third party libraries for doing QUIC and HTTP/3, the report advises that there should be follow-up audits of the involved libraries. Fair proposal, but that is of course something that is beyond what we as a project can do.

Trail of Bits is professional and a pleasure to work with. Now having done it twice, I have nothing but good things to say about the team we have worked with.

From curl’s side, I would like to also highlight and thank Stefan Eissing and Dan Fandrich for participating in the process.

The full report is available on the curl website, here.

The third

This is (quite fittingly, since it is for HTTP/3) the third external security audit performed on curl source code, even if this one was more limited in scope than the previous ones done in 2016 and 2022. Encouragingly, the number of detected important issues has decreased with every new audit. We love scrutiny and we take security seriously. I think this shows in the audit reports.

Related

OSTIF’s blog about the audit.

Image

The top image is a mashup of the official curl logo and the official IETF HTTP/3 logo. Done by me.

DISPUTED, not REJECTED

I keep insisting that the CVE system is broken and that the database of existing CVEs hosted by MITRE (and imported into lots of other databases) is full of questionable content and plenty of downright lies. A primary explanation for us being in this ugly situation is that it is simply next to impossible to get rid of invalid CVEs.

First this

I already wrote about the bogus curl CVE-2020-19909 last year and how our request to reject it was denied, because someone nameless at MITRE obviously knows the situation much better than any curl developer. This situation then forces us, the curl project, to provide documentation explaining how this is a documented CVE but not a vulnerability. Completely contrary to the very idea of CVEs.

A sane system would have a concept where rubbish is scrubbed off.

Now this

The curl project registered for and became a CNA in mid-January 2024, ideally to help us filter out bad CVE input better. The future will tell if this effort works or not. (It was also recently highlighted that the Linux kernel is now a CNA for similar reasons, and I expect to see many more Open Source projects go the same route.)

However, in late December 2023, just weeks before we became a CNA, someone (anonymous again) requested a CVE Id from MITRE for a curl issue. Sure enough they were immediately given CVE-2023-52071, according to how the system works.

This CVE was made public on January 30 2024, and the curl project was of course immediately made aware of it. A quick glance at the specifics was all we needed: this is another bogus claim. This is not a security problem, and again, establishing that fact does not require an experienced curl developer; it is quite easily discoverable.

Given the history of previous bogus CVEs, I was soon emailed by CVE db companies asking me for confirmations about this CVE and I was of course honest and told them that no, this is not a security problem. Do not warn your users about this.

We are a CNA now, meaning that we should be able to control curl issues better, even if this CVE was requested before we were officially given the keys to the kingdom. We immediately requested this CVE to be rejected, on the grounds that it was wrongly assigned in the first place.

“Will provide some confusion”

In the first response from MITRE to our rejection request, they insisted that:

We discussed this internally and believe it does deserve a CVE ID. If we transfer, and Curl REJECTS, then the reporter will likely come back to us and dispute which will provide some confusion for the public.

They actually think putting DISPUTED on the issue is less confusing to the public than rejecting it, because rejecting risks an appeal from the original reporter?

They say in this response that they think it actually deserves a CVE Id. If there was any way to have a conversation with these guys, I would like to ask them what grounds they base this on. Then lecture them on how the world works.

This communication has only been done indirectly with MITRE via our root CNA (Red Hat).

DISPUTED vs REJECTED

So it did not fly.

According to the MITRE guidelines: When one party disagrees with another party’s assertion that a particular issue is a vulnerability, a CVE Record assigned to that issue may be designated with a “DISPUTED” tag.

If someone says the earth is flat, we need to say that fact is disputed? No it is not. It is plain wrong. Incorrect. Bad. Stupid. Silly. Remove-the-statement worthy.

This meant I needed to take the fight to the next level. This policy is not good enough and it needs to be adjusted. This is not a disagreement on the facts. I insist that this is not a vulnerability to begin with. It was wrongly assigned a CVE in the first place. It feels ridiculous that the burden of proof falls on me to prove how this is not a security problem instead of the other way around: if someone had just had the spine to ask the original submitter to explain, prove, hint or suggest how this is a vulnerability, then no CVE would ever have been created for this in the first place. Because that person could not have done that.

The plain truth is that there is no system for doing this. There is no requirement on the individual to actually back up or explain what they claim. The system is designed for good-faith reporters against bad-faith product organizations, so that bad companies cannot shut down whistleblowers, basically. Instead it allows irresponsible or bad-faith reporters to populate the CVE database with rubbish.

Once the CVE is in, the product organization, like curl here, is not allowed to REJECT it. We have to go the lame route and say that the facts in the CVE are DISPUTED. We are apparently in disagreement whether the totally incorrect claim is totally incorrect or not. Bizarre.

Did I mention this is a broken system?

Elevated

Being a CNA at least means we have a foot in the door. An issue has been filed against the policy and guidelines and it has been elevated at MITRE via our root CNA (Red Hat). I cannot say if this eventually will make a difference or not, but I have decided to “take one for the team” and spend this time and effort on this case in the belief that if we manage to nudge the process ever so slightly in the right direction, it could be worth it.

For the sake of everyone. For the sake of my sanity.

Documented

In the curl documentation for CVE-2023-52071, which we unwillingly have to provide even though the issue is bogus, I have included this whole story, including quoting the motivations from my email to MITRE as to why this CVE should be rejected in spite of the current procedure not allowing us to.

Future

Hopefully, supposedly, ideally, crossing my fingers, future CVEs against curl or libcurl will immediately be passed via us since we are now a CNA. This is how it is supposed to work. We will of course immediately and with no mercy reject and refuse all attempts in filing silly CVEs for issues that aren’t vulnerabilities.

The “elevated issue” above might (hopefully) lead to non-CNA organizations getting an increased ability to filter junk out of the system – and then perhaps lessen the need for the entire world to become CNAs. I am not overly optimistic that we will reach that position anytime soon, as clearly the system has worked like this for a long time and I expect resistance to change.

I can almost guarantee that I will write more blog posts about CVEs in the future. Hopefully when I have great news about updated CVE rejection policies.

Update

(Feb 23, 21:33 UTC) The CVE records have now been updated by MITRE and according to NVD for example, this CVE is now REJECTED. Wow.

I was not told about this; someone in a discussion thread mentioned it.

Contingency planning for me and curl

This is a frequently asked question: how will I handle the situation if/when I step away from the curl project? What happens if I get run over by a bus (or, less dramatically, go on a permanent holiday) tomorrow? What’s the contingency plan?

You would perhaps think that it could affect a few more things that I work on than just curl, but I rarely get questions about any other projects. But okay, I have long since accepted that curl is the single thing people are most likely to associate with me.

I’m not leaving

Let me start by saying that I have no plans to leave the curl project any time soon. curl is such a huge part of my life that I would not know what to do if I did not spend a large chunk of it thinking about, talking about, blogging about and working on curl development. I am not ruling out that I might step back as leader of the project in a distant future, but it sure does not feel like that will happen within the next decade.

I am far from done yet. curl is not done yet. The Internet has not stopped evolving yet.

Also: the most likely way I will leave the project in a distant future is slowly and in a controlled manner, where I can make sure that everyone gets everything they need before I completely disappear into the shadows.

This is not a solo show

I also want to stress that curl is not a solo mission. We have surpassed 1,200 commit authors in total, and we average 25 commit authors every month, with about 10 new committers arriving every month. My share of all commits has been continuously shrinking for many years.

Documented

A healthy and thriving open source project should stand on its own legs and not rely on the presence or responses of single contributors. Everything should be documented and explained: how things work in the code, but also how processes work, how decisions are made, and so on. Someone who arrives at the project, alone in the middle of the night without network access, should be able to figure out everything without having to ask anyone.

I work hard at documenting everything in curl as much and as well as possible. My ambition is to have curl stand out as one of the best documented projects/products, no matter what you compare it against.

Distributed responsibilities

If a single maintainer vanishes tomorrow, the project should survive it fine. Redundancy is key, and we must make sure that we have a whole team of people with the necessary rights and knowledge to carry the torch forward. We invite new maintainers to the team every once in a while, so that there are at least a dozen or so of us who can do things like merging code into the repository or updating the website. Many of them rarely exercise that right, but they have it and they can.

A single maintainer’s sudden absence can certainly be a blow to the project, but it should not be lethal.

My “BDFL role” in curl is not enforced by locking others out. There is a whole team that can do just about everything in the project that I do. When and if they want to.

Accounts

I have logins and credentials to some services that the whole team does not. I use them to upload curl releases, manage the website and similar. My accounts. If I am gone tomorrow, getting into my accounts will offer challenges to those who want to shoulder those responsibilities. I have a few trusted dedicated individuals appointed to hopefully manage that in the unlikely event that ever becomes necessary.

BDFL

(Benevolent Dictator For Life)

I may be a sort of dictator in the project, but I prefer to see myself as a “lead developer”, as I hardly ever veto anything and I always encourage discussions and feedback rather than imposing my opinions or ways of working onto others. I strive to be benevolent. I do not claim to always know the correct or proper way to do things.

When I leave, there is no dedicated prince or appointed heir that will take over after me, royal family style. Sure, someone else in the ranks of existing maintainers might step up and become the new project leader, but it could also very well become a group sharing the load, or something else. It is not up to me to decide or control that. It is not decided ahead of time and it will not be.

Similarly, I don’t try to carve my vision of curl into stone tablets to pass on to the next generation. When I am gone, the people who remain will need to steer the ship and have their own visions and ideas. The kids get to make their own choices.

Legacy

I don’t care about how or whether people remember me. I try my best to do good now, and I hope my efforts and work make a net positive contribution to the world. If so, that is good enough for me.

FOSDEM 2024: you too could have made curl

This is the video recording of my talk with this title, given at 10:00 on February 4, 2024 in the K1.105 room at FOSDEM 2024. The room can hold some 800 people, but there were a few hundred seats still unoccupied. Several people I met up with later have insisted that 10 am on a Sunday is way too early for attending talks…

When I was about to start my talk, the slides would not show on the projector. Yeah, sigh. Nothing surprising maybe, but you always hope you can avoid these problems – in particular in the last moment with a huge audience waiting.

There was a separate video monitor laptop that clearly showed that my laptop output the correct thing, in a proper resolution (1280 x 720 as per auto-negotiation), but the projector refused to play ball. The live stream could also see my output, so the problem was somewhere between the video box and the projector.

Several people eventually got involved, things were rebooted multiple times, cables were yanked and replugged again. Only after I installed arandr and force-updated the resolution of my HDMI output to 1920×1080 did the projector suddenly show my presentation. (Later on I was told that people had had the same problem in this room the day before…)

That was about nine minutes of technical difficulties that are cut out from the recording. Nine minutes to test my nerves and presentation finesse as I had to adapt.

Funding Dan to improve curl tests

A few weeks ago I mentioned how we fund Stefan’s work on improving HTTP(/3) in curl. Now, in a similar spirit, we are funding Dan Fandrich to work on further improving the test infrastructure. Dan has worked fiercely on the introduction of parallel tests over the recent year or so, and this work builds on that and continues down that road.

This funding is paid for by sponsors and donors, via Open Collective and GitHub sponsors. Thank you all!

Test Analysis System

curl contains a regression test suite of over 1,900 individual test cases that are run automatically on every commit submission and on every pull request, in almost 130 different environments, meaning that every change can result in more than 140,000 tests being run. A spurious test failure rate of a mere 0.001% is likely to cause a perfectly good PR to end up showing a red failure. A new contributor who doesn’t understand this problem can spend hours poring over his or her patch and the related code in curl, searching for a problem that isn’t there.

Analyzing 140,000 tests for each change to the curl source code to find failure trends (such as flaky tests) demands an automated solution. Dan has created a system (working name Test Clutch) that has been successfully ingesting curl CI test results for much of the past year and has been used by him to find flaky tests as well as permanently failing tests (often submitted under the mistaken impression that the failed test was merely flaky). It collects individual test results from all the CI systems used in the curl project into a database where they can be analyzed.

This system has potential to be useful to a broader base of curl developers to help see test trends, test platform coverage and to better determine which tests are flaky and could use improvement. It has been written in a fashion such that test results for other projects besides curl can also be added and analyzed separately.

Work Projects

Make Test Clutch available

The current test ingestion and analysis system will be productionized and the analysis summary table will be integrated into the curl web site for easy access for developers.

Assist in PR work

This task will involve writing code to trigger the test analysis system to retrieve detailed PR test results when available. It must make a reasonable determination of when all the expected tests have completed (since not all tests will run for every PR), then comment on the PR with a summary of the test results and the believability of any test failures.

When

These are projects that will benefit curl when implemented, but they are not time sensitive and Dan is not going to work full time on them. There are no exact end dates set for them.

The result of Dan’s work will become visible in PRs and website updates as we go forward.

Five year full time curl anniversary

Five years ago now, on February 2nd 2019, I started working for wolfSSL doing curl full time. I have now worked longer for wolfSSL than I previously did for Mozilla.

I have said it before and I will say it again: working full time on curl is my definition of living the dream.

Joining wolfSSL was not just me changing employers; it changed everything for me. First, I am not just a regular employee: I am the lead curl developer, and the curl support we offer commercial customers is unparalleled. No other business or individual can offer the same level of support, knowledge, experience, insight and ability to merge fixes and changes back into curl mainline.

At wolfSSL we offer commercial services around and support for curl and libcurl: contract development of new features, debugging, fixing problems and just about every other aspect of helping users get better use of (lib)curl in their products and services.

I think this change has been good for curl and the curl project as well. The last five years have seen more and faster development than any previous five-year period. I have been able to work intensely and a lot on curl, when fixing bugs and adding features for customers, but even more on the general improving of things for everyone that the money from support customers makes possible.

curl 8.6.0

Numbers

the 254th release
7 changes
56 days (total: 9,448)
154 bug-fixes (total: 9,888)
257 commits (total: 31,684)
0 new public libcurl function (total: 93)
1 new curl_easy_setopt() option (total: 304)
0 new curl command line option (total: 258)
65 contributors, 40 new (total: 3,078)
36 authors, 18 new (total: 1,237)
1 security fix (total: 151)

Release presentation

Security

CVE-2024-0853: OCSP verification bypass with TLS session reuse. curl inadvertently kept the SSL session ID for connections in its cache even when the verify status (OCSP stapling) test failed. A subsequent transfer to the same hostname could then succeed if the session ID cache was still fresh, which then skipped the verify status check.
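For applications, the relevant switch is CURLOPT_SSL_VERIFYSTATUS, which makes libcurl require a valid stapled OCSP response. A minimal sketch; the URL is a made-up example:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* require a valid stapled OCSP response for the transfer */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);
    /* with the fix, a failed status check no longer leaves a reusable
       session ID behind in the session cache */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```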

Changes

  • Markdown documentation. Most of the libcurl and command line documentation is now written using (basic) markdown instead of previous formats. Easier to read, easier to write.
  • CURLE_TOO_LARGE. A new libcurl error code for when “something” is growing too big to be allowed, like a URL, an HTTP request or similar. Previously libcurl would return out of memory in those situations, which caused confusion for users.
  • CURLINFO_QUEUE_TIME_T. Applications can now ask libcurl how long a transfer was “queued” internally before it actually started (see the sketch after this list).
  • CURLOPT_SERVER_RESPONSE_TIMEOUT_MS. A new millisecond version of the already existing option to allow applications higher resolution control.
  • Use GetAddrInfoExW on Windows 8 and later. On current Windows versions libcurl will now do asynchronous name resolving by default without using threads, which should be less resource-heavy.
  • libpsl detection failure in configure causes error. If configure cannot find libpsl, it requires the user to either explicitly say that it should not be used or to fix the problem, to make people who build curl more aware of the PSL state of the build.
  • runtests supports -gl. When you invoke individual test cases on macOS, you can now ask to run them with lldb using -gl, just as you have been able to run them with gdb using -g for decades. Helps debugging difficult cases.
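As referenced in the list above, here is a minimal sketch of two of the new API additions, assuming curl 8.6.0 or later. The URL is a made-up example, and the server response timeout applies to protocols with distinct server responses, such as FTP:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_off_t queue_us = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/README");
    /* give up if the server does not respond within 2500 milliseconds */
    curl_easy_setopt(curl, CURLOPT_SERVER_RESPONSE_TIMEOUT_MS, 2500L);
    if(curl_easy_perform(curl) == CURLE_OK) {
      /* microseconds the transfer spent queued before it started */
      curl_easy_getinfo(curl, CURLINFO_QUEUE_TIME_T, &queue_us);
      printf("queued for %" CURL_FORMAT_CURL_OFF_T " microseconds\n",
             queue_us);
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}
```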

Bugfixes

Here are some of my favorite bugfixes from this cycle:

configure: add libngtcp2_crypto_boringssl detection. Previously it would only detect and build out of the box with the quictls version of ngtcp2 builds.

configure: when enabling QUIC, check that TLS supports QUIC. More efforts trying to detect wrong and invalid build combinations earlier, to avoid users ending up with broken builds.

all libcurl man page examples are verified in CI. Every man page example now compiles cleanly. This step made us detect and fix numerous tiny mistakes of the most annoying kind: when you copy code from docs and it does not work.

curl shows ipfs and ipns as supported “protocols”. In the regular --version output. Even if they are converted to https:// internally.

curl bsearches command line options. The command line parsing is now orders of magnitude faster. Of course it will not really be noticeable outside of the most extreme cases.
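The idea is standard C bsearch over an alphabetically sorted option table instead of a linear scan. A simplified sketch with made-up names, not curl’s actual tables:

```c
#include <stdlib.h>
#include <string.h>

struct option {
  const char *name;
  int id;
};

/* must be kept sorted by name for bsearch to work */
static const struct option options[] = {
  {"cacert", 1}, {"cookie", 2}, {"manual", 3}, {"verbose", 4}
};

static int cmp(const void *key, const void *entry)
{
  return strcmp((const char *)key,
                ((const struct option *)entry)->name);
}

static const struct option *find_option(const char *name)
{
  return bsearch(name, options,
                 sizeof(options) / sizeof(options[0]),
                 sizeof(options[0]), cmp);
}
```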

curl stopped supporting @filename style for --cookie. This syntax was never documented and was not used in any test case. It risked causing unwanted surprises.

curl --remove-on-error only removes “real” files. Mostly as a precaution for when users are unwise enough to run curl with elevated privileges and would save to a device or named pipe etc.

curl no longer sets the file comment on Amiga. It would truncate the URL weirdly and also risked leaking credentials if such were used in the URL.

lib: reduced use of the download buffer all over. The download buffer has over time been abused for all kinds of buffer purposes. This cycle we have made a lot of such buffer users start using their own buffers instead. With a little luck, this will make it possible for us to use a single download buffer for all transfers in a multi handle, thereby drastically reducing the amount of memory used when doing parallel transfers. With no behavior difference or performance degradation. Details on this will follow later.

lib: use memdup0 instead of malloc + memcpy. This was a common code pattern, and with this we reduce the number of mallocs and memcpys at the same time – which we think is good since they are known “problem functions” that are easy to mess up.
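A sketch of what such a helper does. This illustrates the pattern; it is not curl’s exact internal function:

```c
#include <stdlib.h>
#include <string.h>

/* duplicate len bytes from src and null-terminate the copy, replacing
   the error-prone malloc + memcpy (+ manual termination) pattern */
static char *memdup0(const char *src, size_t len)
{
  char *p = malloc(len + 1);
  if(!p)
    return NULL;
  memcpy(p, src, len);
  p[len] = '\0';
  return p;
}
```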

lib: various conversions from malloc to dynbuf. In similar spirit as the above, we continued to switch more functions away from using malloc and family to instead use the internal dynbuf API for managing dynamic buffers in a way that is less likely to cause memory related issues.

resolving: with modern c-ares, use its default timeout. This means tighter timeouts by default, but also that this combo now respects the timeout that can be specified in resolv.conf.

headers API: make sure the trailing newline is never stored. A header with no content on the right side of the colon would erroneously get its trailing newline stored as content.

mprintf: overhaul, performance and bugfixes. The curl printf functions now work even more similarly to their glibc counterparts, especially when provided illegal %-combinations and when using the <num>$ operator. Performance measurements on this new code also say it now executes around 30% faster on commonly used format strings.
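A quick illustration of the <num>$ positional operator, using the public curl_mprintf family from <curl/mprintf.h> (assuming your curl installation ships that header):

```c
#include <curl/mprintf.h>

int main(void)
{
  /* prints "hello world": argument 2 is used first, then argument 1 */
  curl_mprintf("%2$s %1$s\n", "world", "hello");
  return 0;
}
```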

ftp: handle the PORT parsing without allocation. Minor cleanup.

http3: initial support for the OpenSSL 3.2 QUIC stack. The fourth QUIC backend in curl is here.

http: check for “Host:” case insensitively. If you asked to disable this header using a different casing than what was compared, curl would still send an empty header in the request.

http: only act on 101 responses when they are HTTP/1.1. If an HTTP response claims another protocol version together with a 101 response code, it is now considered an illegal combination.

openldap: fix an LDAP crash. LDAP without TLS would crash on basic use.

openldap: fix STARTTLS. It was recently broken in a refactor.

Next

There are no revolutionary changes in the pipe, but there is a series of things we most likely are going to land in the next cycle, making the next version number likely to become 8.7.0.