Posts

Post not yet marked as solved
0 Replies
19 Views
I have an executable on macOS that I'm launching as a User Agent. The same executable can be launched in multiple ways: the user can double-click it to launch it, launch it from the terminal with ./, and so on. Another way is for the user to launch it as a User Agent (i.e., a daemon in the user session). In that scenario, I want my executable to detect whether it was launched as an agent so it can perform certain tasks. How can I accurately determine this? I tried to find out whether agents operate in some unique session, but I could not find anything. Can someone help here? Is this even possible?
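For illustration, a minimal sketch of one common approach, assuming you control the agent's launchd property list (the "--launched-as-agent" flag below is purely hypothetical): pass a distinguishing argument via ProgramArguments in the plist and test for it at startup, optionally combined with a parent-process check.

    import Foundation

    // Hypothetical flag supplied through the agent plist's ProgramArguments;
    // a direct Finder or Terminal launch won't carry it.
    let launchedAsAgent = CommandLine.arguments.contains("--launched-as-agent")

    // getppid() == 1 means launchd spawned this process directly, though that
    // alone cannot distinguish an agent launch from other launchd launches.
    if launchedAsAgent && getppid() == 1 {
        print("Running as a user agent")
    } else {
        print("Launched directly (Finder, Terminal, etc.)")
    }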
Post not yet marked as solved
1 Reply
29 Views
Why does this Regex Builder code in my SwiftUI app not work? I'm parsing a string that might be a date and time with either AM or PM specified for the time. This bit of code looks for the optional AM or PM. The error I get is:

The compiler is unable to type-check this expression in reasonable time; try breaking up the expression into distinct sub-expressions

What would 'distinct sub-expressions' mean in this case? The code:

    let ampmRef = Reference<Substring>()
    let ampmReg = Regex {
        Capture(as: ampmRef) {
            ZeroOrMore {
                ChoiceOf {
                    One("am")
                    One("pm")
                }
            }
        } transform: {
            $0.lowercase
        }
    }.ignoresCase()

In a related question, is there a way to return a default if the ChoiceOf fails both AM and PM?
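One hedged reading of the compiler's advice (the names below are illustrative, not from the original post): pull the inner ChoiceOf out into its own named sub-regex so the type-checker solves smaller expressions. Note also that Substring has lowercased(), not .lowercase, which may be the real source of the failure.

    import RegexBuilder

    // The AM/PM alternation as its own named sub-expression.
    let ampm = Regex {
        ChoiceOf {
            One("am")
            One("pm")
        }
    }

    let ampmRef = Reference<String>()
    let ampmReg = Regex {
        Capture(as: ampmRef) {
            ZeroOrMore { ampm }
        } transform: { match in
            match.lowercased()  // Substring.lowercased() returns String
        }
    }.ignoresCase()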
Post not yet marked as solved
0 Replies
29 Views
I've started getting reports of this today and I am able to replicate it on my end, but I'm looking to see if anyone else can verify, or if it's possibly regional to me (Canada). In Apple Maps (iOS or macOS), if you search for a latitude and longitude -- for example "49.110,-112.110" -- it centers on the location as it always has and shows the "Directions" button. When you tap the Directions button, I get "A route can't be shown because of a problem connecting to the server." Alternatively, if you pass the coordinates in via an Apple Maps URL (https://maps.apple.com/?daddr=49.110,-112.110), it will route, but the route is no longer to those specific coordinates; Apple Maps alters them to the nearest known entity (in this case, the RM of Warner County). If you compare the suggested route's end destination with the search results for entering the coordinates directly, you will see they are different locations, and routes no longer actually take you to the coordinates. In the last photo attached, the arrow points to where "49.110,-112.110" is actually located; tapping the "Directions" button cannot produce a route to it because of a server issue, and if you pass it in via URL, it changes the destination coordinates and begins a route quite a ways away from the intended coordinate. The problem started happening either this morning or last night. Can anyone else confirm this happens to them? Thanks, Mike
Post not yet marked as solved
1 Reply
48 Views
Dear developers, now that we have played with Vision Pro for 3 months, I am wondering why some features are missing from Vision Pro, especially since some seem very basic/fundamental. I would like to see if you know more about the reasons, or correct me if I'm wrong! You are also welcome to share features that you think are fundamental but missing on Vision Pro. My list goes below:

(1) GPS/Compass: cost? heat? battery?

(2) Moving image tracking: is processing the surrounding environment already too computationally intensive?

(3) 3D object tracking: looks like it is only supported on iOS and iPadOS, but not on visionOS

(4) No application focus/pause callbacks are invoked: maybe I'm wrong? But we were not able to detect whether an app has been put into the background or brought to the foreground in order to invoke a callback
Post not yet marked as solved
0 Replies
42 Views
I'm trying to better understand how to 'navigate' around a large USD scene inside a RealityView in SwiftUI (itself in a volume on visionOS). With a little trial and error I have been able to understand scale and translate transforms, and I can have the USD zoom to 'presets' of different scale and translation transforms. Separately, I can also rotate an unscaled and untranslated USD, and have it rotate in place 90 degrees at a time to return to a rotation of 0 degrees. But if I try to combine the two activities, the rotation occurs around the center of the USD, not my zoomed location. Is there a session or sample code available that combines these activities? I think I would understand relatively quickly if I saw it in action. Thanks for any pointers available!
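For what it's worth, here is a minimal sketch of the standard fix, assuming RealityKit entities (the function and parameter names are illustrative): to rotate around the zoomed location instead of the model's own origin, translate the pivot to the origin, rotate, then translate back.

    import RealityKit
    import simd

    // Rotate an entity around an arbitrary pivot point (e.g. the current zoom
    // target) rather than its own origin: p' = pivot + R * (p - pivot).
    func rotate(_ entity: Entity, byYAngle angle: Float, around pivot: SIMD3<Float>) {
        let rotation = simd_quatf(angle: angle, axis: [0, 1, 0])
        let offset = entity.position - pivot
        entity.position = pivot + rotation.act(offset)
        entity.orientation = rotation * entity.orientation
    }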
Post not yet marked as solved
0 Replies
46 Views
I'm experiencing an issue with WKWebView and localStorage. I've set up a standard WKWebView with the configuration: configuration.websiteDataStore = WKWebsiteDataStore.default(). Everything works fine in the Simulator (iOS 16.x, 17.0), but on my iPhone 13 running iOS 17.4 I encounter a problem: when I set a localStorage value on my local HTML page, navigate to another URL within the web view, and then return to the original page, the localStorage is cleared. This behavior is new and wasn't happening before. Has anyone else encountered this, or have any suggestions on how to fix it? The localStorage should be persistent, as it always has been.
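For reference, a minimal sketch of the setup being described, assuming the web view is created in code (names are illustrative):

    import WebKit

    // The default data store is the persistent one, so localStorage written by
    // pages in this web view is expected to survive navigation and relaunches.
    let configuration = WKWebViewConfiguration()
    configuration.websiteDataStore = WKWebsiteDataStore.default()
    let webView = WKWebView(frame: .zero, configuration: configuration)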
Post not yet marked as solved
0 Replies
48 Views
I have noticed that my Apple Watch app seems to randomly quit from time to time. It's not crashing, and I have not been able to reproduce it in a controlled setting, but I have noticed that it seems to happen only when I'm in very high or low temperatures. For instance, I was skiing recently and my app was supposed to stay running the whole time, but often when I raised my wrist it would be back on the home screen and my app wasn't running. This also happened when I was on the beach on a very hot day. But when I'm testing it at home I can keep it running for hours and it never crashes, which leads me to believe it may have to do with the temperature. Does the OS kill apps when it's running in very high or low temperatures? If so, is there anything I can do to prevent this from occurring? Would doing less in my app possibly prevent this? For instance, I have a timer and use a bunch of sensors; would turning those off at times and using less of the display make a difference, or does the OS not care what the apps are actually doing? If not, any other ideas? Thanks
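One way to test the temperature theory, as a hedged sketch (this only observes the documented thermal state; it proves nothing about the OS's termination policy):

    import Foundation

    // Log thermal-state changes while the app runs; seeing .serious or
    // .critical around the time the app disappears would support the theory.
    // Raw values: 0 nominal, 1 fair, 2 serious, 3 critical.
    let observer = NotificationCenter.default.addObserver(
        forName: ProcessInfo.thermalStateDidChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        print("Thermal state: \(ProcessInfo.processInfo.thermalState.rawValue)")
    }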
Post not yet marked as solved
0 Replies
46 Views
Hello everyone, I hope you’re all doing well. I have a question regarding the use of Apple's Find My network. I’m in the early stages of developing an app that would track third-party Find My-compatible tags. Before proceeding further, I want to ensure that I am compliant with Apple’s guidelines and policies. Can anyone provide insight into whether Apple allows developers to use the Find My crowd-sourced network for their own apps? Specifically, I'm interested in tracking third-party Find My tags through my app. Any guidance or resources you can share would be greatly appreciated! Thank you!
Post not yet marked as solved
0 Replies
48 Views
I have built the app in Xcode and deployed it on my iOS device. I am using a personal team account with my Apple ID, and it is an Xcode-managed profile. I go to VPN & Device Management and accept the developer, which works, but then when I hit "Verify App" there is a blink and nothing happens, with no error code. My internet is fine, I am able to access websites, and I do not have a VPN.
Post not yet marked as solved
0 Replies
46 Views
Hi, I am new to Swift and would like to write a simple Swift script to show some HDR images or video on my Vision Pro. I tried some code I found online, shown in the attachment, to put one HDR image and one SDR image side by side, but it doesn't seem to have any HDR effect. Thanks in advance.
Post not yet marked as solved
0 Replies
50 Views
Recently, I have been using SwiftData as the data persistence tool in my new SwiftUI app. The app uses CLLocationManager for background location tracking, which wakes the app so it can update SwiftData-backed data. When the app is in the foreground or background, SwiftData's @Query retrieves data normally. However, once the user manually terminates the app and it is relaunched by a location-based wake-up, SwiftData's @Query no longer retrieves data. I have looked through many resources and haven't found any documentation of anything similar. Can anyone give me some suggestions?
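For context, here is a minimal sketch of the relaunch path being described, assuming significant-change monitoring (the class and names are illustrative):

    import CoreLocation

    // Significant-change monitoring can relaunch a terminated app; the
    // delegate callback below is the point where the post reports @Query
    // coming up empty.
    final class LocationWaker: NSObject, CLLocationManagerDelegate {
        private let manager = CLLocationManager()

        override init() {
            super.init()
            manager.delegate = self
            // Requires "Always" authorization and the location background mode.
            manager.allowsBackgroundLocationUpdates = true
            manager.startMonitoringSignificantLocationChanges()
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateLocations locations: [CLLocation]) {
            // After a background relaunch, any SwiftData reads would happen here.
        }
    }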
Post not yet marked as solved
1 Reply
56 Views
I have an extremely straightforward situation where an @IBOutlet in a ViewController is connected to a property in an XIB file. I've been working with iOS apps for more than ten years and have done this about a million times. For some reason, the property becomes nil at some point after the view is loaded. I can check with the debugger to see that it is not nil at viewDidLoad, and there is nothing in my code that sets it to anything else. I added a custom setter and getter to the variable so that I could stop in the debugger when it gets set, and the setter only gets called once, with a non-nil value. I suspect that somehow a different copy of my ViewController is getting instantiated, but when it does, there are no calls to any of the usual methods like viewDidLoad. In fact, there is not even a call to the init method. I don't understand how this is possible.
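For readers who want to try the same technique, a sketch of the setter/getter instrumentation described above (names are illustrative):

    import UIKit

    final class SomeViewController: UIViewController {
        private var _titleLabel: UILabel?

        // Backing the outlet with a stored property means a breakpoint in the
        // setter catches every assignment, including those made by nib loading.
        @IBOutlet var titleLabel: UILabel? {
            get { _titleLabel }
            set { _titleLabel = newValue } // breakpoint here
        }
    }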
Post not yet marked as solved
0 Replies
61 Views
If I set the capacity of the disk cache to less than 5MB, it doesn't work. Through print statements, I verified that the value of currentDiskUsage does not rise at all, and I also verified that the image makes a network request every time because there is no cached data, even if I quit and relaunch the app. I'm simply wondering why this is happening. Also, I wonder what kind of eviction policy the disk cache follows. I was curious enough to try to find out through the link [here], but there seems to be no implementation of the disk cache at all. Below is the code I used; I'm attaching it just in case.

    import UIKit

    // CacheError is referenced but not defined in the post; a minimal assumed
    // definition so the code compiles:
    enum CacheError: Error {
        case noCachedResponse
    }

    protocol Cacheable {
        func getCachedResponse(
            for path: String,
            completion: @escaping (Result<Data, CacheError>) -> Void
        )
        func save(for path: String, data: Data)
    }

    final class CacheManager {
        static let shared = CacheManager()

        private let imageCache: URLCache

        init() {
            imageCache = URLCache(
                memoryCapacity: 4 * 1024 * 1024, // 4MB
                diskCapacity: 4 * 1024 * 1024    // 4MB
            )
        }
    }

    extension CacheManager: Cacheable {
        func getCachedResponse(
            for path: String,
            completion: @escaping (Result<Data, CacheError>) -> Void
        ) {
            if let url = URL(string: path),
               let cachedResponse = imageCache.cachedResponse(for: URLRequest(url: url)) {
                completion(.success(cachedResponse.data))
                return
            }
            completion(.failure(.noCachedResponse))
        }

        func save(for path: String, data: Data) {
            guard let url = URL(string: path) else { return }

            let response = URLResponse(
                url: url,
                mimeType: nil,
                expectedContentLength: 0,
                textEncodingName: nil
            )

            if let uiImage = UIImage(data: data),
               let compressedData = uiImage.jpegData(compressionQuality: 0.8) {
                #if DEBUG
                let formatter = ByteCountFormatter()
                formatter.allowedUnits = [.useMB]
                formatter.countStyle = .file
                print("""
                === Original size: \(formatter.string(fromByteCount: Int64(data.count)))
                === Cached size: \(formatter.string(fromByteCount: Int64(compressedData.count)))
                """)
                #endif

                let cachedResponse = CachedURLResponse(
                    response: response,
                    data: compressedData
                )
                imageCache.storeCachedResponse(
                    cachedResponse,
                    for: URLRequest(url: url)
                )
            }
        }
    }
Post not yet marked as solved
0 Replies
51 Views
Hi, I am testing a consumable in-app purchase in my app, with a Sandbox account on an iPad device. The transaction was successful: I saw "You're all set. Your purchase was successful. [Environment: Sandbox]". I set a breakpoint in Xcode after the line await transaction.finish() in the following code:

    private func handle(transactionVerification result: VerificationResult<Transaction>) async {
        switch result {
        case let .verified(transaction):
            guard let product = self.products.first(where: { $0.id == transaction.productID }) else {
                return
            }
            self.addPurchased(product)
            await transaction.finish()
            return  // <----- breakpoint

And I saw these property values for the transaction:

    id UInt64 88*****848
    originalID UInt64 437****2496

Then I used the originalID value 437*****2496 in a server library call in node.js:

    ....
    const environment = Environment.SANDBOX
    ....
    const client = new AppStoreServerAPIClient(encodedKey, keyId, issuerId, bundleId, environment)
    ....
    const response = await client.getTransactionInfo("4379072496")

I got apiError: 4040010, errorMessage: 'Transaction id not found.'

Could someone please tell me if I am using the library call correctly with the right id? And why did I get the error? Thank you very much! Kind regards, Shih-Chin Yang [Edited by Moderator]
Post not yet marked as solved
1 Reply
66 Views
Hello, can someone please explain to me how SwiftUI's TabView works "under the hood"? I don't understand why all views in a TabView get reinitialized each time I switch between tabs. Xcode Version 15.3, iOS 16+. Below is the code snippet:

    struct ScreenOne: View {
        init() {
            print("ScreenOne init called !")
        }

        var body: some View {
            Text("This is screen one!")
        }
    }

    struct ScreenTwo: View {
        init() {
            print("ScreenTwo init called !")
        }

        var body: some View {
            Text("This is screen two !")
        }
    }

    struct TabViewTest: View {
        @State var selectedIndex: Int = 1

        var body: some View {
            TabView(selection: $selectedIndex) {
                ScreenOne()
                    .tag(1)
                    .tabItem {
                        Text("Item 1")
                    }
                ScreenTwo()
                    .tag(2)
                    .tabItem {
                        Text("Item 2")
                    }
            }
            // .onChange(of: selectedIndex) { oldValue, newValue in
            //
            // }
            // NOTE: When the code above is uncommented, the ScreenOne and
            // ScreenTwo initializers get called each time a switch to a
            // different tab occurs
        }
    }

Snippet output with the commented-out code:

App loads
Both print statements get called -> "ScreenOne init called !" & "ScreenTwo init called !"
Switch between tabs
Nothing happens

Snippet output with the uncommented code:

App loads
Both print statements get called -> "ScreenOne init called !" & "ScreenTwo init called !"
Switch between tabs
Both print statements get called -> "ScreenOne init called !" & "ScreenTwo init called !"

@eskimo heeelp :) Thanks in advance!
Post not yet marked as solved
0 Replies
57 Views
Hi, I have been noticing some strange issues when using Core ML models in my app. I am using the whisper.cpp implementation, which has a Core ML option. This speeds up transcribing versus Metal. However, every time I use it, the app size shown in iPhone Settings -> General -> Storage increases -- specifically the "Documents and Data" part; the bundle size stays consistent. The size of the app seems to increase by the size of the Core ML model, and after a few reloads it can grow to over 3-4 GB! I thought that maybe the Core ML model (which is in the bundle) was being saved to a file, but I can't see where. I have tried using Instruments and Xcode, plus lots of printing out of the cache and temp directories, deleting the caches, etc., with no effect. I have downloaded the iPhone's container from Xcode and inspected it; there are some files stored in the cache, but only a few KBs, and even though the value in Settings -> Storage shows a few GB, the container is only a few MB. Can someone please help, or give me some guidance on how to figure out why Documents and Data keeps increasing? Where could this folder be pointing to that is not in the Xcode-downloaded container? This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both show the same behavior when using Core ML. Thanks in advance for any help; I am totally baffled by this behaviour.
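One way to hunt for the growth, as a hedged sketch with nothing specific to whisper.cpp: walk the app's sandbox from inside the running app and print the largest files, to see which path is actually accumulating data.

    import Foundation

    // Enumerate everything under the app's home directory and print the
    // largest files in descending order of size.
    func printLargestFiles(limit: Int = 20) {
        let home = URL(fileURLWithPath: NSHomeDirectory())
        let keys: [URLResourceKey] = [.fileSizeKey, .isRegularFileKey]
        guard let enumerator = FileManager.default.enumerator(
            at: home, includingPropertiesForKeys: keys) else { return }

        var sizes: [(path: String, bytes: Int)] = []
        for case let url as URL in enumerator {
            guard let values = try? url.resourceValues(forKeys: Set(keys)),
                  values.isRegularFile == true,
                  let size = values.fileSize else { continue }
            sizes.append((url.path, size))
        }
        for entry in sizes.sorted(by: { $0.bytes > $1.bytes }).prefix(limit) {
            print("\(entry.bytes) bytes  \(entry.path)")
        }
    }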
Post not yet marked as solved
0 Replies
62 Views
Hi, I am testing a consumable in-app purchase for my app on an iPad. Whenever I select to purchase, it always shows "For testing purpose only. You will not be charged for confirming this purchase." Then I touch the blue Purchase button. It instantly shows "Done", then alerts "You're all set. Your purchase was successful. [Environment: Xcode]". It never asks me to enter a Sandbox account. I followed the instructions on the "Testing in-app purchases with sandbox" page, but I cannot find the sandbox account in Settings > App Store. I expected to see [Environment: Sandbox] so I could get the transaction id for the App Store Server API. My iPadOS version is 17.4.1, my Xcode version is 15.1, and I use StoreKit with SwiftUI views. Can someone please shed some light on why I always get [Environment: Xcode]? I googled a lot; the process to test with Sandbox seems to be straightforward, but I just could not get it right. Thank you very much! Kind regards, Shih-Chin Yang
Post not yet marked as solved
0 Replies
63 Views
Hi guys, I'm investigating a failure to play a low-latency Live HLS stream, and I'm getting the following error:

    <AVPlayerItemErrorLog: 0x30367da10>
    #Version: 1.0
    #Software: AppleCoreMedia/1.0.0.21L227 (Apple TV; U; CPU OS 17_4 like Mac OS X; en_us)
    #Date: 2024/05/17 13:11:46.046
    #Fields: date time uri cs-guid s-ip status domain comment cs-iftype
    2024/05/17 13:11:16.016 https://s2-h21-nlivell01.cdn.xxxxxx.***/..../xxxx.m3u8 -15410 "CoreMediaErrorDomain" "Low Latency: Server must support http2 ECN and SACK" -
    2024/05/17 13:11:17.017 -15410 "CoreMediaErrorDomain" "Invalid server blocking reload behavior for low latency" -
    2024/05/17 13:11:17.017

The stream works when loading from a dev server with TLS 1.3, but fails on CDN servers with TLS 1.2. Regular Live streams and VOD streams work normally on those CDN servers. I tried to configure TLSv1.2 in Info.plist, but that didn't help. When running nscurl --ats-diagnostics --verbose, it passes for the server with TLS 1.3 but fails for the CDN servers with TLS 1.2, due to error Code=-1005 "The network connection was lost." Is TLS 1.3 required, or just recommended? Referring to https://developer.apple.com/documentation/http-live-streaming/enabling-low-latency-http-live-streaming-hls and https://datatracker.ietf.org/doc/html/draft-pantos-hls-rfc8216bis: is it possible to configure AVPlayer to skip ECN and SACK validation? Thanks.
Post not yet marked as solved
0 Replies
57 Views
Our app is a time reporting service with various functions around that. The user checks in at work and checks out when they go home. We thought it'd be useful to provide a Live Activity to show how long they have worked for. There are also a couple of other cool things we could do that users would love, but I couldn't find definitive answers to the questions below.

1. We have a geofence-based function that checks the user in when they, for example, arrive at work, and checks them out when they go home, so that they don't have to open the app. However, this means that we will need to start and end the Live Activity from within a geofence trigger. Is this possible? (A starting-point sketch follows this list.)

2. It seems that the maximum time for a Live Activity is 8 hours? Sometimes people work for longer... How would we solve this? I would be fine with 12, since it would solve most cases. Is it possible somehow to go beyond 8 hours, up to 12? If not, is there a callback that "8 hours are up!" so that I could do a final update on the Live Activity, from a counter to "you started working at 09:04"?

3. I have seen that some Live Activities have buttons. It would be neat if the user could check out via a button on the Live Activity. However, since we take the location and call our servers when checking out, we need to be able to use both the location manager and make a network call from the Live Activity. Is this possible?

Thanks in advance, Cheers
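To anchor question 1, here is a minimal sketch of starting a Live Activity with ActivityKit; the attribute type and names are illustrative, and whether this request may be made from a geofence-triggered background launch is exactly the open question.

    import ActivityKit
    import Foundation

    // Illustrative attributes for a "time worked" Live Activity.
    struct WorkAttributes: ActivityAttributes {
        struct ContentState: Codable, Hashable {
            var checkedInAt: Date
        }
    }

    func startWorkActivity() throws {
        let state = WorkAttributes.ContentState(checkedInAt: .now)
        _ = try Activity.request(
            attributes: WorkAttributes(),
            content: .init(state: state, staleDate: nil)
        )
    }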
